00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1050 00:00:00.000 originally caused by: 00:00:00.000 Started by upstream project "nightly-trigger" build number 3712 00:00:00.000 originally caused by: 00:00:00.000 Started by timer 00:00:00.073 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.074 The recommended git tool is: git 00:00:00.074 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.118 Fetching changes from the remote Git repository 00:00:00.121 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.159 Using shallow fetch with depth 1 00:00:00.159 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.159 > git --version # timeout=10 00:00:00.201 > git --version # 'git version 2.39.2' 00:00:00.201 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.676 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.689 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.702 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.702 > git config core.sparsecheckout # timeout=10 00:00:05.713 > git read-tree -mu HEAD # timeout=10 00:00:05.728 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.749 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.750 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.856 [Pipeline] Start of Pipeline 00:00:05.867 [Pipeline] library 00:00:05.868 Loading library shm_lib@master 00:00:05.868 Library shm_lib@master is cached. Copying from home. 00:00:05.882 [Pipeline] node 00:00:05.891 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.892 [Pipeline] { 00:00:05.900 [Pipeline] catchError 00:00:05.901 [Pipeline] { 00:00:05.911 [Pipeline] wrap 00:00:05.918 [Pipeline] { 00:00:05.924 [Pipeline] stage 00:00:05.925 [Pipeline] { (Prologue) 00:00:06.123 [Pipeline] sh 00:00:06.411 + logger -p user.info -t JENKINS-CI 00:00:06.427 [Pipeline] echo 00:00:06.428 Node: CYP9 00:00:06.433 [Pipeline] sh 00:00:06.735 [Pipeline] setCustomBuildProperty 00:00:06.744 [Pipeline] echo 00:00:06.746 Cleanup processes 00:00:06.750 [Pipeline] sh 00:00:07.038 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.038 2432493 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.051 [Pipeline] sh 00:00:07.339 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.339 ++ grep -v 'sudo pgrep' 00:00:07.339 ++ awk '{print $1}' 00:00:07.339 + sudo kill -9 00:00:07.339 + true 00:00:07.353 [Pipeline] cleanWs 00:00:07.363 [WS-CLEANUP] Deleting project workspace... 00:00:07.363 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.370 [WS-CLEANUP] done 00:00:07.373 [Pipeline] setCustomBuildProperty 00:00:07.384 [Pipeline] sh 00:00:07.667 + sudo git config --global --replace-all safe.directory '*' 00:00:07.755 [Pipeline] httpRequest 00:00:09.206 [Pipeline] echo 00:00:09.208 Sorcerer 10.211.164.101 is alive 00:00:09.219 [Pipeline] retry 00:00:09.221 [Pipeline] { 00:00:09.268 [Pipeline] httpRequest 00:00:09.274 HttpMethod: GET 00:00:09.274 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.275 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.280 Response Code: HTTP/1.1 200 OK 00:00:09.281 Success: Status code 200 is in the accepted range: 200,404 00:00:09.281 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.019 [Pipeline] } 00:00:10.030 [Pipeline] // retry 00:00:10.036 [Pipeline] sh 00:00:10.320 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.339 [Pipeline] httpRequest 00:00:10.890 [Pipeline] echo 00:00:10.892 Sorcerer 10.211.164.101 is alive 00:00:10.904 [Pipeline] retry 00:00:10.906 [Pipeline] { 00:00:10.918 [Pipeline] httpRequest 00:00:10.922 HttpMethod: GET 00:00:10.923 URL: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:10.923 Sending request to url: http://10.211.164.101/packages/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:00:10.941 Response Code: HTTP/1.1 200 OK 00:00:10.941 Success: Status code 200 is in the accepted range: 200,404 00:00:10.942 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:46.131 [Pipeline] } 00:01:46.149 [Pipeline] // retry 00:01:46.157 [Pipeline] sh 00:01:46.448 + tar --no-same-owner -xf spdk_a2f5e1c2d535934bced849d8b079523bc74c98f1.tar.gz 00:01:49.010 [Pipeline] sh 00:01:49.298 + git -C spdk log --oneline -n5 00:01:49.298 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:01:49.298 0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions 00:01:49.298 0354bb8e8 nvme/rdma: Force qp disconnect on pg remove 00:01:49.298 0ea9ac02f accel/mlx5: Create pool of UMRs 00:01:49.298 60adca7e1 lib/mlx5: API to configure UMR 00:01:49.317 [Pipeline] withCredentials 00:01:49.330 > git --version # timeout=10 00:01:49.340 > git --version # 'git version 2.39.2' 00:01:49.359 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:49.361 [Pipeline] { 00:01:49.368 [Pipeline] retry 00:01:49.370 [Pipeline] { 00:01:49.383 [Pipeline] sh 00:01:49.671 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:50.254 [Pipeline] } 00:01:50.271 [Pipeline] // retry 00:01:50.276 [Pipeline] } 00:01:50.294 [Pipeline] // withCredentials 00:01:50.303 [Pipeline] httpRequest 00:01:50.653 [Pipeline] echo 00:01:50.655 Sorcerer 10.211.164.101 is alive 00:01:50.663 [Pipeline] retry 00:01:50.665 [Pipeline] { 00:01:50.677 [Pipeline] httpRequest 00:01:50.681 HttpMethod: GET 00:01:50.682 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:50.682 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:50.685 Response Code: HTTP/1.1 200 OK 00:01:50.685 Success: Status code 200 is in the accepted range: 200,404 00:01:50.685 Saving response body to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:53.032 [Pipeline] } 00:01:53.049 [Pipeline] // retry 00:01:53.057 [Pipeline] sh 00:01:53.347 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:55.281 [Pipeline] sh 00:01:55.572 + git -C dpdk log --oneline -n5 00:01:55.573 eeb0605f11 version: 23.11.0 00:01:55.573 238778122a doc: update release notes for 23.11 00:01:55.573 46aa6b3cfc doc: fix description of RSS features 00:01:55.573 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:55.573 7e421ae345 devtools: support skipping forbid rule check 00:01:55.584 [Pipeline] } 00:01:55.597 [Pipeline] // stage 00:01:55.605 [Pipeline] stage 00:01:55.607 [Pipeline] { (Prepare) 00:01:55.628 [Pipeline] writeFile 00:01:55.644 [Pipeline] sh 00:01:55.935 + logger -p user.info -t JENKINS-CI 00:01:55.950 [Pipeline] sh 00:01:56.241 + logger -p user.info -t JENKINS-CI 00:01:56.256 [Pipeline] sh 00:01:56.547 + cat autorun-spdk.conf 00:01:56.547 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.547 SPDK_TEST_NVMF=1 00:01:56.547 SPDK_TEST_NVME_CLI=1 00:01:56.547 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:56.547 SPDK_TEST_NVMF_NICS=e810 00:01:56.547 SPDK_TEST_VFIOUSER=1 00:01:56.547 SPDK_RUN_UBSAN=1 00:01:56.547 NET_TYPE=phy 00:01:56.547 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.547 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.556 RUN_NIGHTLY=1 00:01:56.562 [Pipeline] readFile 00:01:56.587 [Pipeline] withEnv 00:01:56.590 [Pipeline] { 00:01:56.603 [Pipeline] sh 00:01:56.895 + set -ex 00:01:56.895 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:56.895 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:56.895 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.895 ++ SPDK_TEST_NVMF=1 00:01:56.895 ++ SPDK_TEST_NVME_CLI=1 00:01:56.895 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:56.895 ++ SPDK_TEST_NVMF_NICS=e810 00:01:56.895 ++ SPDK_TEST_VFIOUSER=1 00:01:56.895 ++ SPDK_RUN_UBSAN=1 00:01:56.895 ++ NET_TYPE=phy 00:01:56.895 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:56.895 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:56.895 ++ RUN_NIGHTLY=1 00:01:56.895 + case $SPDK_TEST_NVMF_NICS in 00:01:56.895 + DRIVERS=ice 00:01:56.895 + [[ tcp == \r\d\m\a ]] 00:01:56.895 + [[ -n ice ]] 00:01:56.895 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:56.895 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:56.895 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:56.895 rmmod: ERROR: Module irdma is not currently loaded 00:01:56.895 rmmod: ERROR: Module i40iw is not currently loaded 00:01:56.895 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:56.895 + true 00:01:56.895 + for D in $DRIVERS 00:01:56.895 + sudo modprobe ice 00:01:56.895 + exit 0 00:01:56.906 [Pipeline] } 00:01:56.921 [Pipeline] // withEnv 00:01:56.926 [Pipeline] } 00:01:56.940 [Pipeline] // stage 00:01:56.950 [Pipeline] catchError 00:01:56.952 [Pipeline] { 00:01:56.966 [Pipeline] timeout 00:01:56.967 Timeout set to expire in 1 hr 0 min 00:01:56.969 [Pipeline] { 00:01:56.983 [Pipeline] stage 00:01:56.985 [Pipeline] { (Tests) 00:01:57.000 [Pipeline] sh 00:01:57.292 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.292 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.292 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.292 + [[ -n 
/var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:57.292 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.292 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:57.292 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:57.292 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:57.292 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:57.292 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:57.292 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:57.292 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:57.292 + source /etc/os-release 00:01:57.292 ++ NAME='Fedora Linux' 00:01:57.292 ++ VERSION='39 (Cloud Edition)' 00:01:57.292 ++ ID=fedora 00:01:57.292 ++ VERSION_ID=39 00:01:57.292 ++ VERSION_CODENAME= 00:01:57.292 ++ PLATFORM_ID=platform:f39 00:01:57.292 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:57.292 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:57.292 ++ LOGO=fedora-logo-icon 00:01:57.292 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:57.292 ++ HOME_URL=https://fedoraproject.org/ 00:01:57.292 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:57.292 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:57.292 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:57.292 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:57.292 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:57.292 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:57.292 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:57.292 ++ SUPPORT_END=2024-11-12 00:01:57.292 ++ VARIANT='Cloud Edition' 00:01:57.292 ++ VARIANT_ID=cloud 00:01:57.292 + uname -a 00:01:57.292 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:57.292 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:00.598 Hugepages 00:02:00.598 node hugesize free / total 00:02:00.598 node0 1048576kB 0 / 0 00:02:00.598 node0 2048kB 0 / 0 00:02:00.598 node1 1048576kB 0 / 0 00:02:00.598 node1 2048kB 0 / 0 00:02:00.598 00:02:00.598 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:00.598 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:00.598 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:00.598 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:00.598 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:00.598 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:00.598 + rm -f /tmp/spdk-ld-path 00:02:00.598 + source autorun-spdk.conf 00:02:00.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.598 ++ SPDK_TEST_NVMF=1 00:02:00.598 ++ SPDK_TEST_NVME_CLI=1 00:02:00.598 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.598 ++ SPDK_TEST_NVMF_NICS=e810 00:02:00.598 ++ SPDK_TEST_VFIOUSER=1 00:02:00.598 ++ SPDK_RUN_UBSAN=1 00:02:00.598 ++ NET_TYPE=phy 00:02:00.598 ++ 
SPDK_TEST_NATIVE_DPDK=v23.11 00:02:00.598 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.598 ++ RUN_NIGHTLY=1 00:02:00.598 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:00.598 + [[ -n '' ]] 00:02:00.598 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.598 + for M in /var/spdk/build-*-manifest.txt 00:02:00.598 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:00.598 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.598 + for M in /var/spdk/build-*-manifest.txt 00:02:00.598 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:00.598 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.598 + for M in /var/spdk/build-*-manifest.txt 00:02:00.598 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:00.598 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.598 ++ uname 00:02:00.598 + [[ Linux == \L\i\n\u\x ]] 00:02:00.598 + sudo dmesg -T 00:02:00.598 + sudo dmesg --clear 00:02:00.598 + dmesg_pid=2434076 00:02:00.598 + [[ Fedora Linux == FreeBSD ]] 00:02:00.598 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.598 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.599 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:00.599 + [[ -x /usr/src/fio-static/fio ]] 00:02:00.599 + export FIO_BIN=/usr/src/fio-static/fio 00:02:00.599 + FIO_BIN=/usr/src/fio-static/fio 00:02:00.599 + sudo dmesg -Tw 00:02:00.599 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:00.599 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:00.599 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:00.599 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.599 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.599 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:00.599 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.599 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.599 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.599 09:19:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:00.599 09:19:36 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:00.599 09:19:36 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.599 09:19:36 -- 
nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:00.599 09:19:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:00.599 09:19:36 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.860 09:19:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:00.860 09:19:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:00.860 09:19:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:00.860 09:19:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:00.860 09:19:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.860 09:19:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.860 09:19:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.860 09:19:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.860 09:19:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.860 09:19:36 -- paths/export.sh@5 -- $ export PATH 00:02:00.860 09:19:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.860 09:19:36 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:00.860 09:19:36 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:00.860 09:19:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733732376.XXXXXX 00:02:00.860 09:19:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733732376.eEO5qO 00:02:00.860 09:19:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:00.860 09:19:36 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:00.860 09:19:36 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 
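The trace here shows autobuild_common.sh deriving a per-run scratch directory from a single epoch stamp (date +%s feeding mktemp); the same stamp then names the resource-monitor logs launched just below. A minimal sketch of that pattern, with illustrative variable names rather than the exact helper code:

  # One epoch stamp ties the scratch dir and the monitor logs to this run.
  ts=$(date +%s)                                     # e.g. 1733732376 in this log
  SPDK_WORKSPACE=$(mktemp -dt "spdk_${ts}.XXXXXX")   # -> /tmp/spdk_1733732376.eEO5qO here
  monitor_tag="monitor.autobuild.sh.${ts}"           # passed as -p to collect-cpu-load & co.
  echo "workspace=$SPDK_WORKSPACE tag=$monitor_tag"
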
00:02:00.860 09:19:36 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:00.860 09:19:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:00.860 09:19:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:00.860 09:19:36 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:00.860 09:19:36 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:00.860 09:19:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.860 09:19:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:00.860 09:19:36 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:00.860 09:19:36 -- pm/common@17 -- $ local monitor 00:02:00.860 09:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.860 09:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.860 09:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.860 09:19:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.860 09:19:36 -- pm/common@25 -- $ sleep 1 00:02:00.860 09:19:36 -- pm/common@21 -- $ date +%s 00:02:00.860 09:19:36 -- pm/common@21 -- $ date +%s 00:02:00.860 09:19:36 -- pm/common@21 -- $ date +%s 00:02:00.860 09:19:36 -- pm/common@21 -- $ date +%s 00:02:00.860 09:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733732376 00:02:00.860 09:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733732376 00:02:00.860 09:19:36 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733732376 00:02:00.860 09:19:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733732376 00:02:00.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733732376_collect-cpu-load.pm.log 00:02:00.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733732376_collect-vmstat.pm.log 00:02:00.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733732376_collect-cpu-temp.pm.log 00:02:00.860 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733732376_collect-bmc-pm.bmc.pm.log 00:02:01.801 09:19:37 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:01.801 
09:19:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.801 09:19:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.801 09:19:37 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.801 09:19:37 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.801 Mon Dec 9 08:19:37 AM UTC 2024 00:02:01.801 09:19:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.801 v25.01-pre-311-ga2f5e1c2d 00:02:01.801 09:19:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:01.801 09:19:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.801 09:19:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.801 09:19:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:01.801 09:19:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:01.801 09:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.801 ************************************ 00:02:01.801 START TEST ubsan 00:02:01.801 ************************************ 00:02:01.801 09:19:37 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:01.801 using ubsan 00:02:01.801 00:02:01.801 real 0m0.000s 00:02:01.801 user 0m0.000s 00:02:01.801 sys 0m0.000s 00:02:01.801 09:19:37 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:01.801 09:19:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.801 ************************************ 00:02:01.801 END TEST ubsan 00:02:01.801 ************************************ 00:02:01.801 09:19:37 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:01.801 09:19:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:01.801 09:19:37 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:01.801 09:19:37 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:01.801 09:19:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:01.801 09:19:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.062 ************************************ 00:02:02.062 START TEST build_native_dpdk 00:02:02.062 ************************************ 00:02:02.062 09:19:37 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:02.062 
09:19:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:02.062 eeb0605f11 version: 23.11.0 00:02:02.062 238778122a doc: update release notes for 23.11 00:02:02.062 46aa6b3cfc doc: fix description of RSS features 00:02:02.062 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:02.062 7e421ae345 devtools: support skipping forbid rule check 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ 
read -ra ver1 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:02.062 patching file config/rte_config.h 00:02:02.062 Hunk #1 succeeded at 60 (offset 1 line). 00:02:02.062 09:19:37 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:02.062 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:02.063 patching file lib/pcapng/rte_pcapng.c 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:02.063 09:19:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:02.063 09:19:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:06.276 The Meson build system 00:02:06.276 Version: 1.5.0 00:02:06.276 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:06.276 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:06.276 Build type: native build 00:02:06.276 Program cat found: YES (/usr/bin/cat) 00:02:06.276 Project name: DPDK 00:02:06.276 Project version: 23.11.0 00:02:06.276 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.276 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:06.276 Host machine cpu family: x86_64 00:02:06.276 Host machine cpu: x86_64 00:02:06.276 Message: ## Building in Developer Mode ## 00:02:06.276 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.276 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:06.276 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.276 Program python3 found: YES (/usr/bin/python3) 00:02:06.276 Program cat found: YES (/usr/bin/cat) 00:02:06.276 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
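The xtrace leading into this meson run walks the lt/ge helpers sourced from spdk/scripts/common.sh: each version string is split on '.', '-' and ':' into an array, and components are compared pairwise until one side wins. A sketch reconstructed from the trace (the verbatim helper handles more operators and edge cases):

  decimal() {
      local d=$1
      if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi  # non-numeric part -> 0 (assumption)
  }
  cmp_versions() {
      local ver1 ver1_l ver2 ver2_l v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      local op=$2
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
          local a b
          a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
          if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
          if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
      done
      [[ $op == *=* ]]   # all components equal: true only for operators that allow equality
  }
  lt() { cmp_versions "$1" '<' "$2"; }
  ge() { cmp_versions "$1" '>=' "$2"; }
  lt 23.11.0 21.11.0 || echo 'not older'                 # return 1, as in the trace
  lt 23.11.0 24.07.0 && echo 'rte_pcapng patch applies'  # return 0, so patch -p1 runs above
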
00:02:06.276 Compiler for C supports arguments -march=native: YES 00:02:06.276 Checking for size of "void *" : 8 00:02:06.276 Checking for size of "void *" : 8 (cached) 00:02:06.276 Library m found: YES 00:02:06.276 Library numa found: YES 00:02:06.276 Has header "numaif.h" : YES 00:02:06.276 Library fdt found: NO 00:02:06.276 Library execinfo found: NO 00:02:06.276 Has header "execinfo.h" : YES 00:02:06.276 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.276 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.276 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.276 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.276 Run-time dependency openssl found: YES 3.1.1 00:02:06.276 Run-time dependency libpcap found: YES 1.10.4 00:02:06.276 Has header "pcap.h" with dependency libpcap: YES 00:02:06.276 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.276 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.276 Compiler for C supports arguments -Wformat: YES 00:02:06.276 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.276 Compiler for C supports arguments -Wformat-security: NO 00:02:06.276 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.276 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.276 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.276 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.276 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.276 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.276 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.276 Compiler for C supports arguments -Wundef: YES 00:02:06.276 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.276 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.276 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.276 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.276 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.276 Program objdump found: YES (/usr/bin/objdump) 00:02:06.276 Compiler for C supports arguments -mavx512f: YES 00:02:06.276 Checking if "AVX512 checking" compiles: YES 00:02:06.276 Fetching value of define "__SSE4_2__" : 1 00:02:06.276 Fetching value of define "__AES__" : 1 00:02:06.276 Fetching value of define "__AVX__" : 1 00:02:06.276 Fetching value of define "__AVX2__" : 1 00:02:06.276 Fetching value of define "__AVX512BW__" : 1 00:02:06.276 Fetching value of define "__AVX512CD__" : 1 00:02:06.276 Fetching value of define "__AVX512DQ__" : 1 00:02:06.276 Fetching value of define "__AVX512F__" : 1 00:02:06.276 Fetching value of define "__AVX512VL__" : 1 00:02:06.276 Fetching value of define "__PCLMUL__" : 1 00:02:06.276 Fetching value of define "__RDRND__" : 1 00:02:06.276 Fetching value of define "__RDSEED__" : 1 00:02:06.276 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:06.276 Fetching value of define "__znver1__" : (undefined) 00:02:06.276 Fetching value of define "__znver2__" : (undefined) 00:02:06.276 Fetching value of define "__znver3__" : (undefined) 00:02:06.276 Fetching value of define "__znver4__" : (undefined) 00:02:06.276 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.277 Message: lib/log: Defining dependency "log" 00:02:06.277 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.277 Message: lib/telemetry: Defining dependency "telemetry" 
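Each "Compiler for C supports arguments ..." probe above amounts to meson compiling a throwaway translation unit with the candidate flag and testing the exit status. Roughly the same check by hand (a sketch; meson's real probes are more careful, e.g. around -Wno-* flags that gcc accepts silently):

  # Does the host compiler accept -mavx512f? Mirrors a meson has_argument-style check.
  if echo 'int main(void) { return 0; }' | gcc -mavx512f -Werror -x c - -o /dev/null 2>/dev/null; then
      echo 'Compiler for C supports arguments -mavx512f: YES'
  else
      echo 'Compiler for C supports arguments -mavx512f: NO'
  fi
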
00:02:06.277 Checking for function "getentropy" : NO 00:02:06.277 Message: lib/eal: Defining dependency "eal" 00:02:06.277 Message: lib/ring: Defining dependency "ring" 00:02:06.277 Message: lib/rcu: Defining dependency "rcu" 00:02:06.277 Message: lib/mempool: Defining dependency "mempool" 00:02:06.277 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.277 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.277 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:06.277 Compiler for C supports arguments -mpclmul: YES 00:02:06.277 Compiler for C supports arguments -maes: YES 00:02:06.277 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.277 Compiler for C supports arguments -mavx512bw: YES 00:02:06.277 Compiler for C supports arguments -mavx512dq: YES 00:02:06.277 Compiler for C supports arguments -mavx512vl: YES 00:02:06.277 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.277 Compiler for C supports arguments -mavx2: YES 00:02:06.277 Compiler for C supports arguments -mavx: YES 00:02:06.277 Message: lib/net: Defining dependency "net" 00:02:06.277 Message: lib/meter: Defining dependency "meter" 00:02:06.277 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.277 Message: lib/pci: Defining dependency "pci" 00:02:06.277 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.277 Message: lib/metrics: Defining dependency "metrics" 00:02:06.277 Message: lib/hash: Defining dependency "hash" 00:02:06.277 Message: lib/timer: Defining dependency "timer" 00:02:06.277 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.277 Message: lib/acl: Defining dependency "acl" 00:02:06.277 Message: lib/bbdev: Defining dependency "bbdev" 00:02:06.277 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:06.277 Run-time dependency libelf found: YES 0.191 00:02:06.277 Message: lib/bpf: Defining dependency "bpf" 00:02:06.277 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:06.277 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.277 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.277 Message: lib/distributor: Defining dependency "distributor" 00:02:06.277 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.277 Message: lib/efd: Defining dependency "efd" 00:02:06.277 Message: lib/eventdev: Defining dependency "eventdev" 00:02:06.277 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:06.277 Message: lib/gpudev: Defining dependency "gpudev" 00:02:06.277 Message: lib/gro: Defining dependency "gro" 00:02:06.277 Message: lib/gso: Defining dependency "gso" 00:02:06.277 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:06.277 Message: lib/jobstats: Defining dependency "jobstats" 00:02:06.277 Message: lib/latencystats: Defining dependency "latencystats" 00:02:06.277 Message: lib/lpm: Defining dependency "lpm" 00:02:06.277 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512IFMA__" : 1 00:02:06.277 Message: 
lib/member: Defining dependency "member" 00:02:06.277 Message: lib/pcapng: Defining dependency "pcapng" 00:02:06.277 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.277 Message: lib/power: Defining dependency "power" 00:02:06.277 Message: lib/rawdev: Defining dependency "rawdev" 00:02:06.277 Message: lib/regexdev: Defining dependency "regexdev" 00:02:06.277 Message: lib/mldev: Defining dependency "mldev" 00:02:06.277 Message: lib/rib: Defining dependency "rib" 00:02:06.277 Message: lib/reorder: Defining dependency "reorder" 00:02:06.277 Message: lib/sched: Defining dependency "sched" 00:02:06.277 Message: lib/security: Defining dependency "security" 00:02:06.277 Message: lib/stack: Defining dependency "stack" 00:02:06.277 Has header "linux/userfaultfd.h" : YES 00:02:06.277 Has header "linux/vduse.h" : YES 00:02:06.277 Message: lib/vhost: Defining dependency "vhost" 00:02:06.277 Message: lib/ipsec: Defining dependency "ipsec" 00:02:06.277 Message: lib/pdcp: Defining dependency "pdcp" 00:02:06.277 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.277 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.277 Message: lib/fib: Defining dependency "fib" 00:02:06.277 Message: lib/port: Defining dependency "port" 00:02:06.277 Message: lib/pdump: Defining dependency "pdump" 00:02:06.277 Message: lib/table: Defining dependency "table" 00:02:06.277 Message: lib/pipeline: Defining dependency "pipeline" 00:02:06.277 Message: lib/graph: Defining dependency "graph" 00:02:06.277 Message: lib/node: Defining dependency "node" 00:02:06.277 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.277 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.277 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.666 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.666 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:07.666 Compiler for C supports arguments -Wno-unused-value: YES 00:02:07.666 Compiler for C supports arguments -Wno-format: YES 00:02:07.666 Compiler for C supports arguments -Wno-format-security: YES 00:02:07.666 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:07.666 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:07.666 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:07.666 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:07.666 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.666 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.666 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:07.666 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:07.666 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:07.666 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:07.666 Has header "sys/epoll.h" : YES 00:02:07.666 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.666 Configuring doxy-api-html.conf using configuration 00:02:07.666 Configuring doxy-api-man.conf using configuration 00:02:07.666 Program mandb found: YES (/usr/bin/mandb) 00:02:07.666 Program sphinx-build found: NO 00:02:07.666 Configuring rte_build_config.h using configuration 00:02:07.666 Message: 00:02:07.666 ================= 00:02:07.666 Applications Enabled 00:02:07.666 ================= 00:02:07.666 00:02:07.666 apps: 00:02:07.666 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, 
test-cmdline, test-compress-perf,
00:02:07.666 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:07.666 test-pmd, test-regex, test-sad, test-security-perf,
00:02:07.666 
00:02:07.666 Message:
00:02:07.666 =================
00:02:07.666 Libraries Enabled
00:02:07.666 =================
00:02:07.666 
00:02:07.666 libs:
00:02:07.666 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:07.666 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:07.666 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:07.666 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:07.666 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:07.666 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:07.666 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:07.666 
00:02:07.666 
00:02:07.666 Message:
00:02:07.666 ===============
00:02:07.666 Drivers Enabled
00:02:07.666 ===============
00:02:07.666 
00:02:07.666 common:
00:02:07.666 
00:02:07.666 bus:
00:02:07.666 pci, vdev,
00:02:07.666 mempool:
00:02:07.666 ring,
00:02:07.666 dma:
00:02:07.666 
00:02:07.666 net:
00:02:07.666 i40e,
00:02:07.666 raw:
00:02:07.666 
00:02:07.666 crypto:
00:02:07.666 
00:02:07.666 compress:
00:02:07.666 
00:02:07.666 regex:
00:02:07.666 
00:02:07.666 ml:
00:02:07.666 
00:02:07.666 vdpa:
00:02:07.666 
00:02:07.666 event:
00:02:07.666 
00:02:07.666 baseband:
00:02:07.666 
00:02:07.666 gpu:
00:02:07.666 
00:02:07.666 
00:02:07.666 Message:
00:02:07.666 =================
00:02:07.666 Content Skipped
00:02:07.666 =================
00:02:07.666 
00:02:07.666 apps:
00:02:07.666 
00:02:07.666 libs:
00:02:07.666 
00:02:07.666 drivers:
00:02:07.666 common/cpt: not in enabled drivers build config
00:02:07.666 common/dpaax: not in enabled drivers build config
00:02:07.666 common/iavf: not in enabled drivers build config
00:02:07.666 common/idpf: not in enabled drivers build config
00:02:07.666 common/mvep: not in enabled drivers build config
00:02:07.666 common/octeontx: not in enabled drivers build config
00:02:07.666 bus/auxiliary: not in enabled drivers build config
00:02:07.666 bus/cdx: not in enabled drivers build config
00:02:07.666 bus/dpaa: not in enabled drivers build config
00:02:07.666 bus/fslmc: not in enabled drivers build config
00:02:07.666 bus/ifpga: not in enabled drivers build config
00:02:07.666 bus/platform: not in enabled drivers build config
00:02:07.666 bus/vmbus: not in enabled drivers build config
00:02:07.666 common/cnxk: not in enabled drivers build config
00:02:07.666 common/mlx5: not in enabled drivers build config
00:02:07.666 common/nfp: not in enabled drivers build config
00:02:07.666 common/qat: not in enabled drivers build config
00:02:07.666 common/sfc_efx: not in enabled drivers build config
00:02:07.666 mempool/bucket: not in enabled drivers build config
00:02:07.666 mempool/cnxk: not in enabled drivers build config
00:02:07.666 mempool/dpaa: not in enabled drivers build config
00:02:07.666 mempool/dpaa2: not in enabled drivers build config
00:02:07.666 mempool/octeontx: not in enabled drivers build config
00:02:07.666 mempool/stack: not in enabled drivers build config
00:02:07.666 dma/cnxk: not in enabled drivers build config
00:02:07.666 dma/dpaa: not in enabled drivers build config
00:02:07.666 dma/dpaa2: not in enabled drivers build config
00:02:07.666 dma/hisilicon: not in enabled drivers build config
00:02:07.666 dma/idxd: not in enabled drivers build config
00:02:07.666 dma/ioat: not in enabled drivers build config
00:02:07.666 dma/skeleton: not in enabled drivers build config
00:02:07.666 net/af_packet: not in enabled drivers build config
00:02:07.666 net/af_xdp: not in enabled drivers build config
00:02:07.666 net/ark: not in enabled drivers build config
00:02:07.666 net/atlantic: not in enabled drivers build config
00:02:07.666 net/avp: not in enabled drivers build config
00:02:07.666 net/axgbe: not in enabled drivers build config
00:02:07.666 net/bnx2x: not in enabled drivers build config
00:02:07.666 net/bnxt: not in enabled drivers build config
00:02:07.666 net/bonding: not in enabled drivers build config
00:02:07.666 net/cnxk: not in enabled drivers build config
00:02:07.666 net/cpfl: not in enabled drivers build config
00:02:07.666 net/cxgbe: not in enabled drivers build config
00:02:07.666 net/dpaa: not in enabled drivers build config
00:02:07.666 net/dpaa2: not in enabled drivers build config
00:02:07.666 net/e1000: not in enabled drivers build config
00:02:07.666 net/ena: not in enabled drivers build config
00:02:07.666 net/enetc: not in enabled drivers build config
00:02:07.666 net/enetfec: not in enabled drivers build config
00:02:07.666 net/enic: not in enabled drivers build config
00:02:07.666 net/failsafe: not in enabled drivers build config
00:02:07.666 net/fm10k: not in enabled drivers build config
00:02:07.666 net/gve: not in enabled drivers build config
00:02:07.666 net/hinic: not in enabled drivers build config
00:02:07.666 net/hns3: not in enabled drivers build config
00:02:07.666 net/iavf: not in enabled drivers build config
00:02:07.666 net/ice: not in enabled drivers build config
00:02:07.666 net/idpf: not in enabled drivers build config
00:02:07.666 net/igc: not in enabled drivers build config
00:02:07.666 net/ionic: not in enabled drivers build config
00:02:07.666 net/ipn3ke: not in enabled drivers build config
00:02:07.666 net/ixgbe: not in enabled drivers build config
00:02:07.666 net/mana: not in enabled drivers build config
00:02:07.666 net/memif: not in enabled drivers build config
00:02:07.666 net/mlx4: not in enabled drivers build config
00:02:07.666 net/mlx5: not in enabled drivers build config
00:02:07.666 net/mvneta: not in enabled drivers build config
00:02:07.666 net/mvpp2: not in enabled drivers build config
00:02:07.666 net/netvsc: not in enabled drivers build config
00:02:07.666 net/nfb: not in enabled drivers build config
00:02:07.666 net/nfp: not in enabled drivers build config
00:02:07.666 net/ngbe: not in enabled drivers build config
00:02:07.666 net/null: not in enabled drivers build config
00:02:07.666 net/octeontx: not in enabled drivers build config
00:02:07.666 net/octeon_ep: not in enabled drivers build config
00:02:07.666 net/pcap: not in enabled drivers build config
00:02:07.666 net/pfe: not in enabled drivers build config
00:02:07.666 net/qede: not in enabled drivers build config
00:02:07.666 net/ring: not in enabled drivers build config
00:02:07.666 net/sfc: not in enabled drivers build config
00:02:07.666 net/softnic: not in enabled drivers build config
00:02:07.666 net/tap: not in enabled drivers build config
00:02:07.666 net/thunderx: not in enabled drivers build config
00:02:07.666 net/txgbe: not in enabled drivers build config
00:02:07.666 net/vdev_netvsc: not in enabled drivers build config
00:02:07.666 net/vhost: not in enabled drivers build config
00:02:07.666 net/virtio: not in enabled drivers build config
00:02:07.666 net/vmxnet3: not in enabled drivers build config
00:02:07.666 raw/cnxk_bphy: not in enabled drivers build config
00:02:07.666 raw/cnxk_gpio: not in enabled drivers build config
00:02:07.666 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:07.666 raw/ifpga: not in enabled drivers build config
00:02:07.666 raw/ntb: not in enabled drivers build config
00:02:07.666 raw/skeleton: not in enabled drivers build config
00:02:07.666 crypto/armv8: not in enabled drivers build config
00:02:07.666 crypto/bcmfs: not in enabled drivers build config
00:02:07.666 crypto/caam_jr: not in enabled drivers build config
00:02:07.666 crypto/ccp: not in enabled drivers build config
00:02:07.666 crypto/cnxk: not in enabled drivers build config
00:02:07.667 crypto/dpaa_sec: not in enabled drivers build config
00:02:07.667 crypto/dpaa2_sec: not in enabled drivers build config
00:02:07.667 crypto/ipsec_mb: not in enabled drivers build config
00:02:07.667 crypto/mlx5: not in enabled drivers build config
00:02:07.667 crypto/mvsam: not in enabled drivers build config
00:02:07.667 crypto/nitrox: not in enabled drivers build config
00:02:07.667 crypto/null: not in enabled drivers build config
00:02:07.667 crypto/octeontx: not in enabled drivers build config
00:02:07.667 crypto/openssl: not in enabled drivers build config
00:02:07.667 crypto/scheduler: not in enabled drivers build config
00:02:07.667 crypto/uadk: not in enabled drivers build config
00:02:07.667 crypto/virtio: not in enabled drivers build config
00:02:07.667 compress/isal: not in enabled drivers build config
00:02:07.667 compress/mlx5: not in enabled drivers build config
00:02:07.667 compress/octeontx: not in enabled drivers build config
00:02:07.667 compress/zlib: not in enabled drivers build config
00:02:07.667 regex/mlx5: not in enabled drivers build config
00:02:07.667 regex/cn9k: not in enabled drivers build config
00:02:07.667 ml/cnxk: not in enabled drivers build config
00:02:07.667 vdpa/ifc: not in enabled drivers build config
00:02:07.667 vdpa/mlx5: not in enabled drivers build config
00:02:07.667 vdpa/nfp: not in enabled drivers build config
00:02:07.667 vdpa/sfc: not in enabled drivers build config
00:02:07.667 event/cnxk: not in enabled drivers build config
00:02:07.667 event/dlb2: not in enabled drivers build config
00:02:07.667 event/dpaa: not in enabled drivers build config
00:02:07.667 event/dpaa2: not in enabled drivers build config
00:02:07.667 event/dsw: not in enabled drivers build config
00:02:07.667 event/opdl: not in enabled drivers build config
00:02:07.667 event/skeleton: not in enabled drivers build config
00:02:07.667 event/sw: not in enabled drivers build config
00:02:07.667 event/octeontx: not in enabled drivers build config
00:02:07.667 baseband/acc: not in enabled drivers build config
00:02:07.667 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:07.667 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:07.667 baseband/la12xx: not in enabled drivers build config
00:02:07.667 baseband/null: not in enabled drivers build config
00:02:07.667 baseband/turbo_sw: not in enabled drivers build config
00:02:07.667 gpu/cuda: not in enabled drivers build config
00:02:07.667 
00:02:07.667 
00:02:07.667 Build targets in project: 215
00:02:07.667 
00:02:07.667 DPDK 23.11.0
00:02:07.667 
00:02:07.667 User defined options
00:02:07.667 libdir : lib
00:02:07.667 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:07.667 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:07.667 c_link_args : 
00:02:07.667 enable_docs : false
00:02:07.667 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:02:07.667 enable_kmods : false
00:02:07.667 machine : native
00:02:07.667 tests : false
00:02:07.667 
00:02:07.667 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:07.667 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:07.667 09:19:42 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144
00:02:07.667 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:07.932 [1/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.932 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.932 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:07.932 [4/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.932 [5/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.932 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:07.932 [7/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:07.932 [8/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:07.932 [9/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:07.932 [10/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:07.932 [11/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:07.932 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:07.932 [13/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:07.932 [14/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:07.932 [15/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:07.932 [16/705] Linking static target lib/librte_kvargs.a 00:02:07.932 [17/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.191 [18/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.191 [19/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.191 [20/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:08.191 [21/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.191 [22/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.191 [23/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.191 [24/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.191 [25/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.191 [26/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.191 [27/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.191 [28/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.191 [29/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.191 [30/705] Linking static target lib/librte_pci.a 00:02:08.191 [31/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.191 [32/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
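[Editor's note] The "User defined options" block above is meson's echo of exactly what the autobuild wrapper passed in, and the WARNING shows the configure step still used the deprecated bare `meson` invocation. A minimal sketch of the equivalent hand-run configure, assuming the current directory is the dpdk checkout; option names and values are copied verbatim from the echo above, but this is not the wrapper's literal code (that lives in common/autobuild_common.sh):

    # Hedged reconstruction of this log's configure step; 'meson setup'
    # replaces the deprecated bare 'meson' form flagged in the WARNING above.
    meson setup build-tmp \
        --prefix /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Denable_kmods=false -Dmachine=native -Dtests=false
    # Build with the same parallelism the log shows:
    ninja -C build-tmp -j144

The trimmed-down enable_drivers list is why the 705-target build below links only the i40e PMD plus the pci/vdev buses and the ring mempool, while every other driver lands under "Content Skipped".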
00:02:08.191 [33/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.191 [34/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.191 [35/705] Linking static target lib/librte_log.a 00:02:08.452 [36/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.452 [37/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:08.452 [38/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:08.452 [39/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.452 [40/705] Linking static target lib/librte_cfgfile.a 00:02:08.452 [41/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.452 [42/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.452 [43/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.452 [44/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.452 [45/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.452 [46/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.452 [47/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.452 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.711 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.712 [50/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.712 [51/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.712 [52/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.712 [53/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:08.712 [54/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.712 [55/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.712 [56/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.712 [57/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.712 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.712 [59/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.712 [60/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.712 [61/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:08.712 [62/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:08.712 [63/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.712 [64/705] Linking static target lib/librte_meter.a 00:02:08.712 [65/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.712 [66/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:08.712 [67/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.712 [68/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.712 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.712 [70/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:08.712 [71/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.712 [72/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:08.712 [73/705] Compiling C object 
lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:08.712 [74/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:08.712 [75/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:08.712 [76/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.712 [77/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.712 [78/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.712 [79/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.712 [80/705] Linking static target lib/librte_ring.a 00:02:08.712 [81/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.712 [82/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.712 [83/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.712 [84/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:08.712 [85/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.712 [86/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.712 [87/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:08.712 [88/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:08.712 [89/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:08.712 [90/705] Linking static target lib/librte_cmdline.a 00:02:08.712 [91/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.712 [92/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.712 [93/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.712 [94/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:08.712 [95/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.712 [96/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:08.712 [97/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.712 [98/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.712 [99/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.712 [100/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.712 [101/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:08.712 [102/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:08.712 [103/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.712 [104/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.712 [105/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.973 [106/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:08.973 [107/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.973 [108/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:08.973 [109/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:08.973 [110/705] Linking static target lib/librte_bitratestats.a 00:02:08.973 [111/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.973 [112/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:08.973 [113/705] Linking static target 
lib/librte_metrics.a 00:02:08.973 [114/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:08.973 [115/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.973 [116/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.973 [117/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.973 [118/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.973 [119/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:08.973 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:08.973 [121/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:08.973 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.973 [123/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.973 [124/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:08.973 [125/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:08.973 [126/705] Linking static target lib/librte_net.a 00:02:08.973 [127/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:08.973 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.973 [129/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:08.973 [130/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:08.973 [131/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:08.973 [132/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.973 [133/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:08.973 [134/705] Linking static target lib/librte_compressdev.a 00:02:08.973 [135/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.237 [136/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:09.237 [137/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.237 [138/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.237 [139/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.237 [140/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.237 [141/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.237 [142/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:09.237 [143/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.237 [144/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:09.237 [145/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.237 [146/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.237 [147/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:09.237 [148/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.237 [149/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.237 [150/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:09.237 [151/705] Linking static target lib/librte_timer.a 00:02:09.237 [152/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.237 [153/705] Linking target lib/librte_log.so.24.0 00:02:09.237 [154/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.237 [155/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.237 [156/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.237 [157/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:09.237 [158/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:09.237 [159/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:09.237 [160/705] Linking static target lib/librte_dispatcher.a 00:02:09.237 [161/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:09.237 [162/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.237 [163/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:09.237 [164/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.237 [165/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:09.237 [166/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:09.237 [167/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:09.237 [168/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:09.237 [169/705] Linking static target lib/librte_gpudev.a 00:02:09.237 [170/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.237 [171/705] Linking static target lib/librte_jobstats.a 00:02:09.237 [172/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.237 [173/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:09.237 [174/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:09.237 [175/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:09.237 [176/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:09.237 [177/705] Linking static target lib/librte_bbdev.a 00:02:09.237 [178/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:09.237 [179/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.502 [180/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:09.502 [181/705] Linking static target lib/librte_mempool.a 00:02:09.502 [182/705] Linking static target lib/librte_gro.a 00:02:09.502 [183/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:09.502 [184/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:09.502 [185/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:09.502 [186/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:09.502 [187/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:09.502 [188/705] Linking static target lib/librte_distributor.a 00:02:09.502 [189/705] Linking static target lib/librte_dmadev.a 00:02:09.502 [190/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:09.502 [191/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:09.502 [192/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:09.502 [193/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:09.502 [194/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:09.502 [195/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.502 [196/705] Compiling C object 
lib/librte_sched.a.p/sched_rte_red.c.o 00:02:09.502 [197/705] Linking target lib/librte_kvargs.so.24.0 00:02:09.502 [198/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:09.502 [199/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.502 [200/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:09.502 [201/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:09.502 [202/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.502 [203/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.502 [204/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:09.502 [205/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.502 [206/705] Linking static target lib/librte_stack.a 00:02:09.502 [207/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.502 [208/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.502 [209/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:09.502 [210/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:09.502 [211/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:09.502 [212/705] Linking static target lib/librte_gso.a 00:02:09.502 [213/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:02:09.502 [214/705] Linking static target lib/librte_latencystats.a 00:02:09.502 [215/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:09.502 [216/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:09.502 [217/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:09.502 [218/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:09.763 [219/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:09.763 [220/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:09.763 [221/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.763 [222/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:09.763 [223/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:09.763 [224/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.763 [225/705] Linking static target lib/librte_rawdev.a 00:02:09.763 [226/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:09.763 [227/705] Linking static target lib/librte_telemetry.a 00:02:09.763 [228/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:09.763 [229/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:09.763 [230/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:09.763 [231/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:09.763 [232/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:09.763 [233/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:09.763 [234/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:09.763 [235/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:09.763 [236/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:09.763 [237/705] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.763 [238/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:09.763 [239/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:09.763 [240/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:09.763 [241/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:09.763 [242/705] Linking static target lib/librte_regexdev.a 00:02:09.763 [243/705] Linking static target lib/librte_rcu.a 00:02:09.763 [244/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.763 [245/705] Linking static target lib/librte_bpf.a 00:02:09.763 [246/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:09.763 [247/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:09.763 [248/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:09.763 [249/705] Linking static target lib/librte_eal.a 00:02:09.763 [250/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.763 [251/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:09.763 [252/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.763 [253/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.763 [254/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:09.763 [255/705] Linking static target lib/librte_mldev.a 00:02:09.763 [256/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:09.763 [257/705] Linking static target lib/librte_reorder.a 00:02:09.763 [258/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:09.763 [259/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:09.763 [260/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.763 [261/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.763 [262/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.763 [263/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.763 [264/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:09.763 [265/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:09.763 [266/705] Linking static target lib/librte_security.a 00:02:09.764 [267/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.764 [268/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.764 [269/705] Linking static target lib/librte_ip_frag.a 00:02:09.764 [270/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.764 [271/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:09.764 [272/705] Linking static target lib/librte_power.a 00:02:09.764 [273/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.764 [274/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.026 [275/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:10.026 [276/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:10.026 [277/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.026 [278/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:10.026 [279/705] 
Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:10.026 [280/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.026 [281/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.026 [282/705] Linking static target lib/librte_pcapng.a 00:02:10.026 [283/705] Linking static target lib/librte_mbuf.a 00:02:10.026 [284/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:10.026 [285/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:10.026 [286/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:10.026 [287/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:10.026 [288/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:10.026 [289/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.026 [290/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:10.026 [291/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:10.026 [292/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:10.026 [293/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.026 [294/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:10.026 [295/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:10.026 [296/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.026 [297/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:10.026 [298/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:10.026 [299/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:10.026 [300/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.026 [301/705] Linking static target lib/librte_efd.a 00:02:10.026 [302/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:10.026 [303/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:10.287 [304/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:10.287 [305/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:10.287 [306/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:10.287 [307/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:10.287 [308/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:10.287 [309/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:10.287 [310/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:10.287 [311/705] Linking static target lib/librte_rib.a 00:02:10.287 [312/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:10.287 [313/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:10.287 [314/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:10.287 [315/705] Linking static target lib/librte_lpm.a 00:02:10.287 [316/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.287 [317/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:10.287 [318/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:10.287 [319/705] Generating lib/rcu.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:10.287 [320/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:10.287 [321/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:10.287 [322/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:10.287 [323/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:10.287 [324/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:10.287 [325/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:10.287 [326/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:10.287 [327/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.287 [328/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:10.287 [329/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:10.287 [330/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.287 [331/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:10.287 [332/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:10.287 [333/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:10.287 [334/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.287 [335/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:10.287 [336/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:10.548 [337/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:10.548 [338/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [339/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:10.548 [340/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:10.548 [341/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [342/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [343/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:10.548 [344/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:10.548 [345/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:10.548 [346/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:10.548 [347/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:10.548 [348/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:10.548 [349/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:10.548 [350/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:10.548 [351/705] Linking static target lib/librte_fib.a 00:02:10.548 [352/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:10.548 [353/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:10.548 [354/705] Linking target lib/librte_telemetry.so.24.0 00:02:10.548 [355/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:10.548 [356/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:10.548 [357/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:10.548 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:10.548 [359/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:10.548 [360/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:10.548 [361/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:10.548 [362/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [363/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:10.548 [364/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [365/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.548 [366/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:10.548 [367/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:10.548 [368/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:10.548 [369/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:10.548 [370/705] Linking static target lib/librte_graph.a 00:02:10.548 [371/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:10.548 [372/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:10.548 [373/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:10.548 [374/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:10.813 [375/705] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:10.813 [376/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:10.813 [377/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.813 [378/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:10.813 [379/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:10.813 [380/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:10.813 [381/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:10.813 [382/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:10.813 [383/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:10.813 [384/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:10.813 [385/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:10.813 [386/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:10.813 [387/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:10.813 [388/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:10.813 [389/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:10.813 [390/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:10.813 [391/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:10.813 [392/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:10.813 [393/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:10.813 [394/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:10.813 [395/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:10.813 [396/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:10.813 [397/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.813 [398/705] Compiling C 
object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:10.813 [399/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:10.813 [400/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:10.813 [401/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:10.813 [402/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:10.813 [403/705] Linking static target lib/librte_pdump.a 00:02:10.813 [404/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.814 [405/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:10.814 [406/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.814 [407/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:10.814 [408/705] Linking static target drivers/librte_bus_vdev.a 00:02:11.072 [409/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:11.072 [410/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.072 [411/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.072 [412/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:11.072 [413/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:11.072 [414/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:11.072 [415/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:11.072 [416/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:11.072 [417/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:11.072 [418/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:11.073 [419/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:11.073 [420/705] Linking static target lib/librte_cryptodev.a 00:02:11.073 [421/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:11.073 [422/705] Linking static target lib/librte_table.a 00:02:11.073 [423/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:11.073 [424/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:11.073 [425/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:11.073 [426/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:11.073 [427/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:11.073 [428/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:11.073 [429/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:11.073 [430/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.073 [431/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.073 [432/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.073 [433/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:11.073 [434/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.073 [435/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.073 [436/705] Generating lib/power.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:11.073 [437/705] Linking static target drivers/librte_bus_pci.a 00:02:11.073 [438/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:11.073 [439/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:11.073 [440/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:11.073 [441/705] Linking static target lib/librte_sched.a 00:02:11.073 [442/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:11.073 [443/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.073 [444/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:11.073 [445/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.073 [446/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:11.073 [447/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:11.331 [448/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:11.331 [449/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:11.331 [450/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:11.331 [451/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.331 [452/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:11.331 [453/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:11.331 [454/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:11.331 [455/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:11.331 [456/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:11.331 [457/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.331 [458/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:11.331 [459/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:11.331 [460/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:11.331 [461/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:11.331 [462/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:11.331 [463/705] Linking static target lib/librte_node.a 00:02:11.331 [464/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:11.332 [465/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:11.332 [466/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:11.332 [467/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:11.332 [468/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:11.332 [469/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:11.332 [470/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:11.332 [471/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:11.332 [472/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:11.332 [473/705] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:11.332 [474/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:11.332 [475/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:11.332 [476/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:11.332 [477/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:11.332 [478/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:11.332 [479/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:11.332 [480/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:11.332 [481/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:11.332 [482/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:11.332 [483/705] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:11.332 [484/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:11.332 [485/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:11.332 [486/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:11.332 [487/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:11.332 [488/705] Linking static target lib/librte_ipsec.a 00:02:11.332 [489/705] Linking static target lib/librte_member.a 00:02:11.332 [490/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:11.332 [491/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.332 [492/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:11.591 [493/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:11.591 [494/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:11.591 [495/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:11.591 [496/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:11.592 [497/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:11.592 [498/705] Linking static target drivers/librte_mempool_ring.a 00:02:11.592 [499/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:11.592 [500/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:11.592 [501/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:11.592 [502/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:11.592 [503/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:11.592 [504/705] Linking static target lib/librte_pdcp.a 00:02:11.592 [505/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.592 [506/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:11.592 [507/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:11.592 [508/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:11.592 [509/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:11.592 [510/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:11.592 [511/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:11.592 [512/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:11.592 [513/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:11.592 [514/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:11.592 [515/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:11.592 [516/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:11.592 [517/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:11.592 [518/705] Linking static target lib/acl/libavx2_tmp.a 00:02:11.592 [519/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.592 [520/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:11.592 [521/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:11.592 [522/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:11.592 [523/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.592 [524/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:11.592 [525/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:11.853 [526/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:11.853 [527/705] Linking static target lib/librte_acl.a 00:02:11.853 [528/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:11.853 [529/705] Linking static target lib/librte_port.a 00:02:11.853 [530/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:11.853 [531/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:11.853 [532/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:11.853 [533/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:11.853 [534/705] Linking static target lib/librte_hash.a 00:02:11.853 [535/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:11.853 [536/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:11.853 [537/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:11.853 [538/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:11.853 [539/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.853 [540/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:11.853 [541/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:11.853 [542/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:11.853 [543/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:11.853 [544/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:11.853 [545/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:11.853 [546/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.853 [547/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:11.853 [548/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:11.853 [549/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:11.853 [550/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:11.853 [551/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:11.853 [552/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:11.853 [553/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:12.115 [554/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.115 [555/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.115 [556/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:12.115 [557/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:12.115 [558/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.115 [559/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:12.115 [560/705] Linking static target lib/librte_eventdev.a 00:02:12.115 [561/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:12.115 [562/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:12.115 [563/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:12.115 [564/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.115 [565/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:12.376 [566/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:12.376 [567/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:12.376 [568/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:12.637 [569/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.637 [570/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:12.637 [571/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:12.637 [572/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.637 [573/705] Linking static target lib/librte_ethdev.a 00:02:12.897 [574/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:12.897 [575/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.897 [576/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:13.158 [577/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:13.158 [578/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:13.728 [579/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:13.728 [580/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:13.728 [581/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:14.022 [582/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.023 [583/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:14.023 [584/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:14.023 [585/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:14.342 [586/705] Linking static target drivers/librte_net_i40e.a 00:02:14.914 [587/705] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:15.175 [588/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:15.175 [589/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.748 [590/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.958 [591/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:19.958 [592/705] Linking static target lib/librte_pipeline.a 00:02:20.529 [593/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.790 [594/705] Linking static target lib/librte_vhost.a 00:02:21.050 [595/705] Linking target app/dpdk-test-dma-perf 00:02:21.050 [596/705] Linking target app/dpdk-test-gpudev 00:02:21.050 [597/705] Linking target app/dpdk-test-mldev 00:02:21.050 [598/705] Linking target app/dpdk-test-fib 00:02:21.050 [599/705] Linking target app/dpdk-test-pipeline 00:02:21.050 [600/705] Linking target app/dpdk-test-security-perf 00:02:21.050 [601/705] Linking target app/dpdk-testpmd 00:02:21.050 [602/705] Linking target app/dpdk-test-cmdline 00:02:21.050 [603/705] Linking target app/dpdk-pdump 00:02:21.050 [604/705] Linking target app/dpdk-test-acl 00:02:21.050 [605/705] Linking target app/dpdk-proc-info 00:02:21.050 [606/705] Linking target app/dpdk-dumpcap 00:02:21.050 [607/705] Linking target app/dpdk-test-compress-perf 00:02:21.050 [608/705] Linking target app/dpdk-graph 00:02:21.050 [609/705] Linking target app/dpdk-test-regex 00:02:21.050 [610/705] Linking target app/dpdk-test-flow-perf 00:02:21.050 [611/705] Linking target app/dpdk-test-crypto-perf 00:02:21.050 [612/705] Linking target app/dpdk-test-sad 00:02:21.050 [613/705] Linking target app/dpdk-test-bbdev 00:02:21.050 [614/705] Linking target app/dpdk-test-eventdev 00:02:21.309 [615/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.309 [616/705] Linking target lib/librte_eal.so.24.0 00:02:21.309 [617/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.309 [618/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:21.569 [619/705] Linking target lib/librte_pci.so.24.0 00:02:21.569 [620/705] Linking target lib/librte_ring.so.24.0 00:02:21.569 [621/705] Linking target lib/librte_meter.so.24.0 00:02:21.569 [622/705] Linking target lib/librte_timer.so.24.0 00:02:21.569 [623/705] Linking target lib/librte_cfgfile.so.24.0 00:02:21.569 [624/705] Linking target lib/librte_stack.so.24.0 00:02:21.569 [625/705] Linking target lib/librte_jobstats.so.24.0 00:02:21.569 [626/705] Linking target lib/librte_dmadev.so.24.0 00:02:21.569 [627/705] Linking target lib/librte_rawdev.so.24.0 00:02:21.569 [628/705] Linking target drivers/librte_bus_vdev.so.24.0 00:02:21.569 [629/705] Linking target lib/librte_acl.so.24.0 00:02:21.569 [630/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:21.569 [631/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:21.569 [632/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:21.569 [633/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:21.569 [634/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:21.569 [635/705] Generating symbol file 
drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:21.569 [636/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:21.569 [637/705] Linking target lib/librte_mempool.so.24.0 00:02:21.569 [638/705] Linking target lib/librte_rcu.so.24.0 00:02:21.569 [639/705] Linking target drivers/librte_bus_pci.so.24.0 00:02:21.828 [640/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:21.828 [641/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:21.828 [642/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:21.828 [643/705] Linking target lib/librte_mbuf.so.24.0 00:02:21.828 [644/705] Linking target drivers/librte_mempool_ring.so.24.0 00:02:21.828 [645/705] Linking target lib/librte_rib.so.24.0 00:02:21.828 [646/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.828 [647/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:22.089 [648/705] Linking target lib/librte_regexdev.so.24.0 00:02:22.089 [649/705] Linking target lib/librte_net.so.24.0 00:02:22.089 [650/705] Linking target lib/librte_bbdev.so.24.0 00:02:22.089 [651/705] Linking target lib/librte_compressdev.so.24.0 00:02:22.089 [652/705] Linking target lib/librte_distributor.so.24.0 00:02:22.089 [653/705] Linking target lib/librte_gpudev.so.24.0 00:02:22.089 [654/705] Linking target lib/librte_mldev.so.24.0 00:02:22.089 [655/705] Linking target lib/librte_reorder.so.24.0 00:02:22.089 [656/705] Linking target lib/librte_cryptodev.so.24.0 00:02:22.089 [657/705] Linking target lib/librte_sched.so.24.0 00:02:22.089 [658/705] Linking target lib/librte_fib.so.24.0 00:02:22.089 [659/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:22.089 [660/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:22.089 [661/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:22.089 [662/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:22.089 [663/705] Linking target lib/librte_hash.so.24.0 00:02:22.089 [664/705] Linking target lib/librte_security.so.24.0 00:02:22.089 [665/705] Linking target lib/librte_cmdline.so.24.0 00:02:22.089 [666/705] Linking target lib/librte_ethdev.so.24.0 00:02:22.349 [667/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:22.349 [668/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:22.349 [669/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:22.349 [670/705] Linking target lib/librte_lpm.so.24.0 00:02:22.349 [671/705] Linking target lib/librte_efd.so.24.0 00:02:22.349 [672/705] Linking target lib/librte_member.so.24.0 00:02:22.349 [673/705] Linking target lib/librte_ipsec.so.24.0 00:02:22.349 [674/705] Linking target lib/librte_pdcp.so.24.0 00:02:22.349 [675/705] Linking target lib/librte_metrics.so.24.0 00:02:22.349 [676/705] Linking target lib/librte_bpf.so.24.0 00:02:22.349 [677/705] Linking target lib/librte_pcapng.so.24.0 00:02:22.349 [678/705] Linking target lib/librte_ip_frag.so.24.0 00:02:22.349 [679/705] Linking target lib/librte_gso.so.24.0 00:02:22.349 [680/705] Linking target lib/librte_gro.so.24.0 00:02:22.350 [681/705] Linking target lib/librte_power.so.24.0 00:02:22.350 [682/705] Linking target 
lib/librte_eventdev.so.24.0 00:02:22.350 [683/705] Linking target drivers/librte_net_i40e.so.24.0 00:02:22.656 [684/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:22.656 [685/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:22.656 [686/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:22.656 [687/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:22.656 [688/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:22.656 [689/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:22.656 [690/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:22.656 [691/705] Linking target lib/librte_bitratestats.so.24.0 00:02:22.656 [692/705] Linking target lib/librte_latencystats.so.24.0 00:02:22.656 [693/705] Linking target lib/librte_graph.so.24.0 00:02:22.656 [694/705] Linking target lib/librte_pdump.so.24.0 00:02:22.656 [695/705] Linking target lib/librte_dispatcher.so.24.0 00:02:22.656 [696/705] Linking target lib/librte_port.so.24.0 00:02:22.656 [697/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:22.656 [698/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.656 [699/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:22.656 [700/705] Linking target lib/librte_node.so.24.0 00:02:22.917 [701/705] Linking target lib/librte_vhost.so.24.0 00:02:22.917 [702/705] Linking target lib/librte_table.so.24.0 00:02:22.917 [703/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:24.832 [704/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.832 [705/705] Linking target lib/librte_pipeline.so.24.0 00:02:24.832 09:20:00 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:24.832 09:20:00 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:24.832 09:20:00 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j144 install 00:02:25.092 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:25.092 [0/1] Installing files. 
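The three traced commands above are the tail of the build_native_dpdk step: autobuild_common.sh runs "uname -s", tests the result against FreeBSD, and, since the host is Linux, drives the meson build tree with ninja. A minimal sketch of that gate, reconstructed only from the xtrace lines in this log (the FreeBSD branch is never taken here, so its body is a placeholder assumption; DPDK_DIR is a variable introduced for readability, not a name from the script):

    # Reconstructed from the xtrace above (autobuild_common.sh@201 and @214).
    # The install prefix $DPDK_DIR/build is inferred from the
    # "Installing ... to .../dpdk/build/..." lines that follow.
    DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk

    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        :   # placeholder: the FreeBSD path is not exercised in this log
    else
        ninja -C "$DPDK_DIR/build-tmp" -j144 install
    fi

The -j144 parallelism and the build-tmp directory come directly from the traced ninja invocation; everything the install phase copies below lands under $DPDK_DIR/build, including the share/dpdk/examples tree listed next.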
00:02:25.357 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.359 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:25.359 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.360 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:25.361 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:25.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:25.362 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.362 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:25.362 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.362 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.363 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.628 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.628 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.628 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:25.628 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:25.628 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.628 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 
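At this point in the install stage the EAL headers (the generic/ set plus the x86 overrides) are being staged into dpdk/build/include, alongside the static and shared libraries already copied to dpdk/build/lib. As a hedged sketch only — none of the commands below appear in this log, and the lib/pkgconfig location is an assumption about where meson places libdpdk.pc under this prefix — a smoke test of the staged SDK could look like:

  PREFIX=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig"   # assumed libdpdk.pc location
  pkg-config --modversion libdpdk                  # the .so.24.0 sonames above correspond to the 23.11 ABI
  printf '#include <rte_eal.h>\nint main(int argc, char **argv){ return rte_eal_init(argc, argv) < 0; }\n' > smoke.c
  cc smoke.c $(pkg-config --cflags --libs libdpdk) -o smoke

rte_eal.h itself was installed to build/include a few entries above, so the compile line exercises exactly the headers and libraries this stage staged.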
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.629 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.630 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.631 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:25.632 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:25.632 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:25.632 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:25.632 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:25.632 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:25.632 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:25.632 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:25.632 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:25.632 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:25.632 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:25.632 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:25.632 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:25.632 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:25.632 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:25.632 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:25.632 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:25.632 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:25.632 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:25.632 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:25.632 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:25.632 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:25.632 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:25.632 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:25.632 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:25.632 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:25.632 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:25.632 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:25.632 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:25.632 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:25.632 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:25.632 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:25.632 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:25.632 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:25.632 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:25.632 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:25.632 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:25.632 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:25.632 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:25.632 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:25.632 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:25.632 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:25.632 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:25.632 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:25.633 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:25.633 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:25.633 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:25.633 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:25.633 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:25.633 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:25.633 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:25.633 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:25.633 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:25.633 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:25.633 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:25.633 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:25.633 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:25.633 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:25.633 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:25.633 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:25.633 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:25.633 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:25.633 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:25.633 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:25.633 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:25.633 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:25.633 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:25.633 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:25.633 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:25.633 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:25.633 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:25.633 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:25.633 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:25.633 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:25.633 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:25.633 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:25.633 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:25.633 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:25.633 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:25.633 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:25.633 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:25.633 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:25.633 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:25.633 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:25.633 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:25.633 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:25.633 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:25.633 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:25.633 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:25.633 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:25.633 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:25.633 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:25.633 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:25.633 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:25.633 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:25.633 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:25.633 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:25.633 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:25.633 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:25.633 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:25.633 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:25.633 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:25.633 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:25.633 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:25.633 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:25.633 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:25.633 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:25.633 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:25.633 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:25.633 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:25.633 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:25.633 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:25.633 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:25.633 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:25.633 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:25.633 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:25.633 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:25.633 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:25.633 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:25.633 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:25.633 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:25.633 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:25.633 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:25.633 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:25.633 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:25.633 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:25.633 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:25.633 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:25.633 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:25.633 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:25.633 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:25.633 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:25.633 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:25.633 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:25.633 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:02:25.895 09:20:01 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat
00:02:25.895 09:20:01 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:25.895
00:02:25.895 real 0m23.831s
00:02:25.895 user 7m12.084s
00:02:25.895 sys 3m22.505s
00:02:25.895 09:20:01 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:25.895 09:20:01 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:25.895 ************************************
00:02:25.895 END TEST build_native_dpdk
00:02:25.895 ************************************
00:02:25.895 09:20:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:25.895 09:20:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:25.895 09:20:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:02:25.895 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:02:26.155 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:26.155 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:26.155 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:02:26.415 Using 'verbs' RDMA provider
00:02:42.267 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:54.545 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:55.116 Creating mk/config.mk...done.
00:02:55.116 Creating mk/cc.flags.mk...done.
00:02:55.116 Type 'make' to build.
00:02:55.116 09:20:30 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:55.116 09:20:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:55.116 09:20:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.116 09:20:30 -- common/autotest_common.sh@10 -- $ set +x
00:02:55.116 ************************************
00:02:55.116 START TEST make
00:02:55.116 ************************************
00:02:55.116 09:20:30 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:55.687 make[1]: Nothing to be done for 'all'.
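A brief aside on the configure step above: the --with-dpdk flag points SPDK at the DPDK tree just installed, and the "Using .../dpdk/build/lib/pkgconfig for additional libs..." record shows it resolving compile and link flags through the libdpdk.pc file staged into build/lib/pkgconfig earlier in this log. A minimal sketch of the same lookup done by hand, assuming only that pkg-config is on PATH and reusing the workspace paths from this log:

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --cflags libdpdk   # include flags, pointing at the build/include tree populated above
  pkg-config --libs libdpdk     # link flags for the shared DPDK libraries

The output of such a query should match the "DPDK libraries:" and "DPDK includes:" paths reported above.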
00:02:57.077 The Meson build system
00:02:57.077 Version: 1.5.0
00:02:57.077 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:57.077 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:57.077 Build type: native build
00:02:57.077 Project name: libvfio-user
00:02:57.077 Project version: 0.0.1
00:02:57.077 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:57.077 C linker for the host machine: gcc ld.bfd 2.40-14
00:02:57.077 Host machine cpu family: x86_64
00:02:57.077 Host machine cpu: x86_64
00:02:57.077 Run-time dependency threads found: YES
00:02:57.077 Library dl found: YES
00:02:57.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:57.077 Run-time dependency json-c found: YES 0.17
00:02:57.077 Run-time dependency cmocka found: YES 1.1.7
00:02:57.077 Program pytest-3 found: NO
00:02:57.077 Program flake8 found: NO
00:02:57.077 Program misspell-fixer found: NO
00:02:57.077 Program restructuredtext-lint found: NO
00:02:57.077 Program valgrind found: YES (/usr/bin/valgrind)
00:02:57.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:57.077 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:57.077 Compiler for C supports arguments -Wwrite-strings: YES
00:02:57.077 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:57.077 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:57.077 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:57.077 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:57.077 Build targets in project: 8 00:02:57.077 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:57.077 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:57.077 00:02:57.077 libvfio-user 0.0.1 00:02:57.077 00:02:57.077 User defined options 00:02:57.077 buildtype : debug 00:02:57.077 default_library: shared 00:02:57.077 libdir : /usr/local/lib 00:02:57.077 00:02:57.077 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.077 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:57.338 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:57.338 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:57.338 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:57.338 [4/37] Compiling C object samples/null.p/null.c.o 00:02:57.338 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:57.338 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:57.338 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:57.338 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:57.338 [9/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:57.338 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:57.338 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:57.338 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:57.338 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:57.338 [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:57.338 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:57.338 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:57.338 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:57.338 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:57.338 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:57.338 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:57.338 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:57.338 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:57.338 [23/37] Compiling C object samples/server.p/server.c.o 00:02:57.338 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:57.338 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:57.338 [26/37] Compiling C object samples/client.p/client.c.o 00:02:57.338 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:57.338 [28/37] Linking target samples/client 00:02:57.338 [29/37] Linking target test/unit_tests 00:02:57.598 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:57.598 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:57.598 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:57.598 [33/37] Linking target samples/null 00:02:57.598 [34/37] Linking target samples/lspci 00:02:57.598 [35/37] Linking target samples/server 00:02:57.598 [36/37] Linking target samples/gpio-pci-idio-16 00:02:57.598 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:57.598 INFO: autodetecting backend as ninja 00:02:57.598 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:57.858 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:58.116 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:58.116 ninja: no work to do. 00:03:20.071 CC lib/log/log.o 00:03:20.071 CC lib/log/log_flags.o 00:03:20.071 CC lib/log/log_deprecated.o 00:03:20.071 CC lib/ut/ut.o 00:03:20.071 CC lib/ut_mock/mock.o 00:03:20.071 LIB libspdk_ut.a 00:03:20.071 LIB libspdk_ut_mock.a 00:03:20.071 LIB libspdk_log.a 00:03:20.071 SO libspdk_ut.so.2.0 00:03:20.071 SO libspdk_ut_mock.so.6.0 00:03:20.071 SO libspdk_log.so.7.1 00:03:20.071 SYMLINK libspdk_ut.so 00:03:20.071 SYMLINK libspdk_ut_mock.so 00:03:20.071 SYMLINK libspdk_log.so 00:03:20.071 CC lib/ioat/ioat.o 00:03:20.071 CC lib/util/base64.o 00:03:20.071 CC lib/util/bit_array.o 00:03:20.071 CC lib/util/cpuset.o 00:03:20.071 CXX lib/trace_parser/trace.o 00:03:20.071 CC lib/util/crc16.o 00:03:20.071 CC lib/util/crc32.o 00:03:20.071 CC lib/dma/dma.o 00:03:20.071 CC lib/util/crc32c.o 00:03:20.071 CC lib/util/crc32_ieee.o 00:03:20.071 CC lib/util/crc64.o 00:03:20.071 CC lib/util/dif.o 00:03:20.071 CC lib/util/fd.o 00:03:20.071 CC lib/util/fd_group.o 00:03:20.071 CC lib/util/file.o 00:03:20.071 CC lib/util/hexlify.o 00:03:20.071 CC lib/util/net.o 00:03:20.071 CC lib/util/iov.o 00:03:20.071 CC lib/util/math.o 00:03:20.071 CC lib/util/pipe.o 00:03:20.071 CC lib/util/strerror_tls.o 00:03:20.071 CC lib/util/string.o 00:03:20.071 CC lib/util/uuid.o 00:03:20.071 CC lib/util/xor.o 00:03:20.071 CC lib/util/zipf.o 00:03:20.071 CC lib/util/md5.o 00:03:20.332 CC lib/vfio_user/host/vfio_user_pci.o 00:03:20.332 CC lib/vfio_user/host/vfio_user.o 00:03:20.332 LIB libspdk_dma.a 00:03:20.332 SO libspdk_dma.so.5.0 00:03:20.332 LIB libspdk_ioat.a 00:03:20.332 SO libspdk_ioat.so.7.0 00:03:20.332 SYMLINK libspdk_dma.so 00:03:20.332 SYMLINK libspdk_ioat.so 00:03:20.592 LIB libspdk_vfio_user.a 00:03:20.592 SO libspdk_vfio_user.so.5.0 00:03:20.592 LIB libspdk_util.a 00:03:20.592 SYMLINK libspdk_vfio_user.so 00:03:20.592 SO libspdk_util.so.10.1 00:03:20.853 SYMLINK libspdk_util.so 00:03:20.853 LIB libspdk_trace_parser.a 00:03:20.853 SO libspdk_trace_parser.so.6.0 00:03:21.113 SYMLINK libspdk_trace_parser.so 00:03:21.113 CC lib/conf/conf.o 00:03:21.113 CC lib/rdma_utils/rdma_utils.o 00:03:21.113 CC lib/env_dpdk/env.o 00:03:21.113 CC lib/json/json_parse.o 00:03:21.113 CC lib/env_dpdk/memory.o 00:03:21.113 CC lib/json/json_util.o 00:03:21.113 CC lib/env_dpdk/pci.o 00:03:21.113 CC lib/vmd/vmd.o 00:03:21.113 CC lib/json/json_write.o 00:03:21.113 CC lib/env_dpdk/init.o 00:03:21.113 CC lib/vmd/led.o 00:03:21.113 CC lib/env_dpdk/threads.o 00:03:21.113 CC lib/env_dpdk/pci_ioat.o 00:03:21.113 CC lib/env_dpdk/pci_virtio.o 00:03:21.113 CC lib/env_dpdk/pci_vmd.o 00:03:21.113 CC lib/idxd/idxd.o 00:03:21.113 CC lib/env_dpdk/pci_idxd.o 00:03:21.113 CC lib/idxd/idxd_user.o 00:03:21.113 CC lib/env_dpdk/pci_event.o 00:03:21.113 CC lib/idxd/idxd_kernel.o 00:03:21.113 CC lib/env_dpdk/sigbus_handler.o 00:03:21.113 CC lib/env_dpdk/pci_dpdk.o 00:03:21.113 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.113 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.374 LIB libspdk_conf.a 00:03:21.374 LIB libspdk_rdma_utils.a 00:03:21.374 SO libspdk_conf.so.6.0 00:03:21.374 LIB libspdk_json.a 00:03:21.374 SO libspdk_rdma_utils.so.1.0 00:03:21.374 SO libspdk_json.so.6.0 00:03:21.636 SYMLINK libspdk_conf.so 00:03:21.636 
SYMLINK libspdk_rdma_utils.so 00:03:21.636 SYMLINK libspdk_json.so 00:03:21.636 LIB libspdk_idxd.a 00:03:21.636 LIB libspdk_vmd.a 00:03:21.636 SO libspdk_idxd.so.12.1 00:03:21.898 SO libspdk_vmd.so.6.0 00:03:21.898 SYMLINK libspdk_idxd.so 00:03:21.898 SYMLINK libspdk_vmd.so 00:03:21.898 CC lib/rdma_provider/common.o 00:03:21.898 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.898 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.898 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.898 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.898 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.160 LIB libspdk_rdma_provider.a 00:03:22.160 SO libspdk_rdma_provider.so.7.0 00:03:22.160 LIB libspdk_jsonrpc.a 00:03:22.160 SO libspdk_jsonrpc.so.6.0 00:03:22.160 SYMLINK libspdk_rdma_provider.so 00:03:22.421 SYMLINK libspdk_jsonrpc.so 00:03:22.421 LIB libspdk_env_dpdk.a 00:03:22.421 SO libspdk_env_dpdk.so.15.1 00:03:22.683 SYMLINK libspdk_env_dpdk.so 00:03:22.683 CC lib/rpc/rpc.o 00:03:22.944 LIB libspdk_rpc.a 00:03:22.944 SO libspdk_rpc.so.6.0 00:03:22.944 SYMLINK libspdk_rpc.so 00:03:23.206 CC lib/notify/notify.o 00:03:23.206 CC lib/notify/notify_rpc.o 00:03:23.467 CC lib/trace/trace.o 00:03:23.467 CC lib/trace/trace_flags.o 00:03:23.467 CC lib/trace/trace_rpc.o 00:03:23.467 CC lib/keyring/keyring.o 00:03:23.467 CC lib/keyring/keyring_rpc.o 00:03:23.467 LIB libspdk_notify.a 00:03:23.467 SO libspdk_notify.so.6.0 00:03:23.729 LIB libspdk_keyring.a 00:03:23.729 LIB libspdk_trace.a 00:03:23.729 SYMLINK libspdk_notify.so 00:03:23.729 SO libspdk_keyring.so.2.0 00:03:23.729 SO libspdk_trace.so.11.0 00:03:23.729 SYMLINK libspdk_keyring.so 00:03:23.729 SYMLINK libspdk_trace.so 00:03:23.990 CC lib/sock/sock.o 00:03:23.991 CC lib/sock/sock_rpc.o 00:03:23.991 CC lib/thread/thread.o 00:03:23.991 CC lib/thread/iobuf.o 00:03:24.564 LIB libspdk_sock.a 00:03:24.564 SO libspdk_sock.so.10.0 00:03:24.564 SYMLINK libspdk_sock.so 00:03:24.825 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.825 CC lib/nvme/nvme_ctrlr.o 00:03:24.825 CC lib/nvme/nvme_fabric.o 00:03:24.825 CC lib/nvme/nvme_ns_cmd.o 00:03:24.825 CC lib/nvme/nvme_ns.o 00:03:24.825 CC lib/nvme/nvme_pcie_common.o 00:03:24.825 CC lib/nvme/nvme_pcie.o 00:03:24.825 CC lib/nvme/nvme_qpair.o 00:03:24.825 CC lib/nvme/nvme.o 00:03:24.825 CC lib/nvme/nvme_quirks.o 00:03:24.825 CC lib/nvme/nvme_transport.o 00:03:24.825 CC lib/nvme/nvme_discovery.o 00:03:24.825 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.825 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.825 CC lib/nvme/nvme_tcp.o 00:03:24.825 CC lib/nvme/nvme_opal.o 00:03:24.825 CC lib/nvme/nvme_io_msg.o 00:03:24.825 CC lib/nvme/nvme_poll_group.o 00:03:24.825 CC lib/nvme/nvme_zns.o 00:03:24.825 CC lib/nvme/nvme_stubs.o 00:03:24.825 CC lib/nvme/nvme_auth.o 00:03:24.825 CC lib/nvme/nvme_cuse.o 00:03:25.085 CC lib/nvme/nvme_vfio_user.o 00:03:25.085 CC lib/nvme/nvme_rdma.o 00:03:25.346 LIB libspdk_thread.a 00:03:25.346 SO libspdk_thread.so.11.0 00:03:25.606 SYMLINK libspdk_thread.so 00:03:25.866 CC lib/init/json_config.o 00:03:25.866 CC lib/init/subsystem.o 00:03:25.866 CC lib/init/subsystem_rpc.o 00:03:25.866 CC lib/init/rpc.o 00:03:25.866 CC lib/virtio/virtio.o 00:03:25.866 CC lib/virtio/virtio_vhost_user.o 00:03:25.866 CC lib/virtio/virtio_vfio_user.o 00:03:25.866 CC lib/virtio/virtio_pci.o 00:03:25.866 CC lib/vfu_tgt/tgt_endpoint.o 00:03:25.866 CC lib/blob/blobstore.o 00:03:25.866 CC lib/blob/request.o 00:03:25.866 CC lib/vfu_tgt/tgt_rpc.o 00:03:25.866 CC lib/blob/zeroes.o 00:03:25.866 CC lib/blob/blob_bs_dev.o 00:03:25.866 CC lib/accel/accel.o 00:03:25.866 CC 
lib/accel/accel_rpc.o 00:03:25.866 CC lib/accel/accel_sw.o 00:03:25.866 CC lib/fsdev/fsdev.o 00:03:25.866 CC lib/fsdev/fsdev_io.o 00:03:25.866 CC lib/fsdev/fsdev_rpc.o 00:03:26.126 LIB libspdk_init.a 00:03:26.126 SO libspdk_init.so.6.0 00:03:26.126 SYMLINK libspdk_init.so 00:03:26.126 LIB libspdk_virtio.a 00:03:26.126 LIB libspdk_vfu_tgt.a 00:03:26.387 SO libspdk_vfu_tgt.so.3.0 00:03:26.387 SO libspdk_virtio.so.7.0 00:03:26.387 SYMLINK libspdk_vfu_tgt.so 00:03:26.387 SYMLINK libspdk_virtio.so 00:03:26.387 LIB libspdk_fsdev.a 00:03:26.387 SO libspdk_fsdev.so.2.0 00:03:26.387 SYMLINK libspdk_fsdev.so 00:03:26.699 CC lib/event/app.o 00:03:26.699 CC lib/event/reactor.o 00:03:26.699 CC lib/event/log_rpc.o 00:03:26.699 CC lib/event/app_rpc.o 00:03:26.699 CC lib/event/scheduler_static.o 00:03:26.959 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:26.959 LIB libspdk_accel.a 00:03:26.959 SO libspdk_accel.so.16.0 00:03:26.959 LIB libspdk_nvme.a 00:03:26.959 LIB libspdk_event.a 00:03:26.959 SYMLINK libspdk_accel.so 00:03:26.959 SO libspdk_event.so.14.0 00:03:27.219 SO libspdk_nvme.so.15.0 00:03:27.219 SYMLINK libspdk_event.so 00:03:27.219 SYMLINK libspdk_nvme.so 00:03:27.480 CC lib/bdev/bdev.o 00:03:27.480 CC lib/bdev/bdev_rpc.o 00:03:27.480 CC lib/bdev/bdev_zone.o 00:03:27.480 CC lib/bdev/part.o 00:03:27.480 CC lib/bdev/scsi_nvme.o 00:03:27.480 LIB libspdk_fuse_dispatcher.a 00:03:27.480 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.480 SYMLINK libspdk_fuse_dispatcher.so 00:03:28.864 LIB libspdk_blob.a 00:03:28.864 SO libspdk_blob.so.12.0 00:03:28.864 SYMLINK libspdk_blob.so 00:03:29.125 CC lib/blobfs/blobfs.o 00:03:29.125 CC lib/blobfs/tree.o 00:03:29.125 CC lib/lvol/lvol.o 00:03:29.699 LIB libspdk_bdev.a 00:03:29.699 SO libspdk_bdev.so.17.0 00:03:29.699 LIB libspdk_blobfs.a 00:03:29.960 SO libspdk_blobfs.so.11.0 00:03:29.960 SYMLINK libspdk_bdev.so 00:03:29.960 LIB libspdk_lvol.a 00:03:29.960 SYMLINK libspdk_blobfs.so 00:03:29.960 SO libspdk_lvol.so.11.0 00:03:29.960 SYMLINK libspdk_lvol.so 00:03:30.219 CC lib/ftl/ftl_core.o 00:03:30.219 CC lib/ftl/ftl_init.o 00:03:30.219 CC lib/ftl/ftl_layout.o 00:03:30.219 CC lib/ftl/ftl_debug.o 00:03:30.219 CC lib/ftl/ftl_io.o 00:03:30.219 CC lib/ftl/ftl_sb.o 00:03:30.219 CC lib/ftl/ftl_l2p.o 00:03:30.219 CC lib/ftl/ftl_l2p_flat.o 00:03:30.219 CC lib/ublk/ublk.o 00:03:30.219 CC lib/ftl/ftl_nv_cache.o 00:03:30.219 CC lib/scsi/dev.o 00:03:30.219 CC lib/ftl/ftl_band.o 00:03:30.219 CC lib/ublk/ublk_rpc.o 00:03:30.219 CC lib/nvmf/ctrlr.o 00:03:30.219 CC lib/nbd/nbd.o 00:03:30.219 CC lib/ftl/ftl_band_ops.o 00:03:30.219 CC lib/scsi/lun.o 00:03:30.219 CC lib/nbd/nbd_rpc.o 00:03:30.219 CC lib/nvmf/ctrlr_discovery.o 00:03:30.219 CC lib/scsi/port.o 00:03:30.219 CC lib/ftl/ftl_writer.o 00:03:30.219 CC lib/nvmf/ctrlr_bdev.o 00:03:30.219 CC lib/ftl/ftl_rq.o 00:03:30.219 CC lib/scsi/scsi.o 00:03:30.219 CC lib/nvmf/subsystem.o 00:03:30.219 CC lib/scsi/scsi_bdev.o 00:03:30.219 CC lib/ftl/ftl_reloc.o 00:03:30.219 CC lib/nvmf/nvmf.o 00:03:30.219 CC lib/scsi/scsi_pr.o 00:03:30.219 CC lib/ftl/ftl_l2p_cache.o 00:03:30.219 CC lib/nvmf/nvmf_rpc.o 00:03:30.219 CC lib/scsi/scsi_rpc.o 00:03:30.219 CC lib/ftl/ftl_p2l.o 00:03:30.219 CC lib/nvmf/transport.o 00:03:30.219 CC lib/scsi/task.o 00:03:30.219 CC lib/ftl/ftl_p2l_log.o 00:03:30.219 CC lib/nvmf/tcp.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt.o 00:03:30.219 CC lib/nvmf/stubs.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:30.219 CC lib/nvmf/mdns_server.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:30.219 CC lib/nvmf/vfio_user.o 
00:03:30.219 CC lib/nvmf/rdma.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:30.219 CC lib/nvmf/auth.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:30.219 CC lib/ftl/utils/ftl_conf.o 00:03:30.219 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:30.219 CC lib/ftl/utils/ftl_md.o 00:03:30.219 CC lib/ftl/utils/ftl_mempool.o 00:03:30.219 CC lib/ftl/utils/ftl_bitmap.o 00:03:30.219 CC lib/ftl/utils/ftl_property.o 00:03:30.219 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:30.219 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:30.219 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.219 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.219 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.219 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.219 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:30.219 CC lib/ftl/base/ftl_base_dev.o 00:03:30.219 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.219 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:30.219 CC lib/ftl/ftl_trace.o 00:03:30.785 LIB libspdk_nbd.a 00:03:30.785 SO libspdk_nbd.so.7.0 00:03:30.785 LIB libspdk_ublk.a 00:03:31.046 SYMLINK libspdk_nbd.so 00:03:31.046 SO libspdk_ublk.so.3.0 00:03:31.046 LIB libspdk_scsi.a 00:03:31.046 SYMLINK libspdk_ublk.so 00:03:31.046 SO libspdk_scsi.so.9.0 00:03:31.046 SYMLINK libspdk_scsi.so 00:03:31.306 LIB libspdk_ftl.a 00:03:31.566 SO libspdk_ftl.so.9.0 00:03:31.566 CC lib/iscsi/conn.o 00:03:31.566 CC lib/iscsi/init_grp.o 00:03:31.566 CC lib/iscsi/param.o 00:03:31.566 CC lib/iscsi/iscsi.o 00:03:31.566 CC lib/iscsi/tgt_node.o 00:03:31.566 CC lib/iscsi/portal_grp.o 00:03:31.566 CC lib/vhost/vhost.o 00:03:31.566 CC lib/iscsi/iscsi_subsystem.o 00:03:31.566 CC lib/iscsi/iscsi_rpc.o 00:03:31.566 CC lib/vhost/vhost_rpc.o 00:03:31.566 CC lib/iscsi/task.o 00:03:31.566 CC lib/vhost/vhost_scsi.o 00:03:31.566 CC lib/vhost/vhost_blk.o 00:03:31.566 CC lib/vhost/rte_vhost_user.o 00:03:31.826 SYMLINK libspdk_ftl.so 00:03:32.399 LIB libspdk_nvmf.a 00:03:32.399 SO libspdk_nvmf.so.20.0 00:03:32.399 LIB libspdk_vhost.a 00:03:32.399 SYMLINK libspdk_nvmf.so 00:03:32.660 SO libspdk_vhost.so.8.0 00:03:32.660 SYMLINK libspdk_vhost.so 00:03:32.660 LIB libspdk_iscsi.a 00:03:32.922 SO libspdk_iscsi.so.8.0 00:03:32.922 SYMLINK libspdk_iscsi.so 00:03:33.498 CC module/env_dpdk/env_dpdk_rpc.o 00:03:33.498 CC module/vfu_device/vfu_virtio.o 00:03:33.498 CC module/vfu_device/vfu_virtio_blk.o 00:03:33.498 CC module/vfu_device/vfu_virtio_scsi.o 00:03:33.498 CC module/vfu_device/vfu_virtio_fs.o 00:03:33.498 CC module/vfu_device/vfu_virtio_rpc.o 00:03:33.759 LIB libspdk_env_dpdk_rpc.a 00:03:33.759 CC module/accel/ioat/accel_ioat.o 00:03:33.759 CC module/accel/ioat/accel_ioat_rpc.o 00:03:33.759 CC module/accel/dsa/accel_dsa.o 00:03:33.759 CC module/accel/dsa/accel_dsa_rpc.o 00:03:33.759 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:33.759 CC module/blob/bdev/blob_bdev.o 00:03:33.759 CC module/sock/posix/posix.o 00:03:33.759 CC module/accel/error/accel_error.o 00:03:33.759 CC module/accel/iaa/accel_iaa.o 00:03:33.759 CC 
module/accel/error/accel_error_rpc.o 00:03:33.759 CC module/keyring/linux/keyring.o 00:03:33.759 CC module/accel/iaa/accel_iaa_rpc.o 00:03:33.759 CC module/keyring/linux/keyring_rpc.o 00:03:33.759 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:33.759 CC module/keyring/file/keyring.o 00:03:33.759 CC module/scheduler/gscheduler/gscheduler.o 00:03:33.759 CC module/fsdev/aio/fsdev_aio.o 00:03:33.759 CC module/keyring/file/keyring_rpc.o 00:03:33.759 SO libspdk_env_dpdk_rpc.so.6.0 00:03:33.759 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:33.759 CC module/fsdev/aio/linux_aio_mgr.o 00:03:33.759 SYMLINK libspdk_env_dpdk_rpc.so 00:03:34.020 LIB libspdk_accel_ioat.a 00:03:34.020 LIB libspdk_keyring_file.a 00:03:34.020 LIB libspdk_keyring_linux.a 00:03:34.020 LIB libspdk_scheduler_gscheduler.a 00:03:34.020 LIB libspdk_scheduler_dpdk_governor.a 00:03:34.020 LIB libspdk_scheduler_dynamic.a 00:03:34.020 SO libspdk_scheduler_gscheduler.so.4.0 00:03:34.020 SO libspdk_keyring_linux.so.1.0 00:03:34.020 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:34.020 SO libspdk_accel_ioat.so.6.0 00:03:34.020 SO libspdk_keyring_file.so.2.0 00:03:34.020 SO libspdk_scheduler_dynamic.so.4.0 00:03:34.020 LIB libspdk_accel_error.a 00:03:34.020 LIB libspdk_accel_iaa.a 00:03:34.020 LIB libspdk_accel_dsa.a 00:03:34.020 SO libspdk_accel_error.so.2.0 00:03:34.020 SO libspdk_accel_iaa.so.3.0 00:03:34.020 LIB libspdk_blob_bdev.a 00:03:34.020 SYMLINK libspdk_scheduler_dynamic.so 00:03:34.020 SYMLINK libspdk_accel_ioat.so 00:03:34.020 SYMLINK libspdk_scheduler_gscheduler.so 00:03:34.020 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:34.020 SYMLINK libspdk_keyring_file.so 00:03:34.020 SYMLINK libspdk_keyring_linux.so 00:03:34.020 SO libspdk_accel_dsa.so.5.0 00:03:34.020 SYMLINK libspdk_accel_error.so 00:03:34.020 SO libspdk_blob_bdev.so.12.0 00:03:34.282 SYMLINK libspdk_accel_iaa.so 00:03:34.282 SYMLINK libspdk_accel_dsa.so 00:03:34.282 LIB libspdk_vfu_device.a 00:03:34.282 SYMLINK libspdk_blob_bdev.so 00:03:34.282 SO libspdk_vfu_device.so.3.0 00:03:34.282 SYMLINK libspdk_vfu_device.so 00:03:34.282 LIB libspdk_fsdev_aio.a 00:03:34.576 SO libspdk_fsdev_aio.so.1.0 00:03:34.577 LIB libspdk_sock_posix.a 00:03:34.577 SO libspdk_sock_posix.so.6.0 00:03:34.577 SYMLINK libspdk_fsdev_aio.so 00:03:34.577 SYMLINK libspdk_sock_posix.so 00:03:34.924 CC module/bdev/delay/vbdev_delay.o 00:03:34.924 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.924 CC module/bdev/gpt/gpt.o 00:03:34.924 CC module/bdev/null/bdev_null.o 00:03:34.924 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.924 CC module/bdev/null/bdev_null_rpc.o 00:03:34.924 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.924 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.924 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:34.924 CC module/bdev/malloc/bdev_malloc.o 00:03:34.924 CC module/bdev/ftl/bdev_ftl.o 00:03:34.924 CC module/bdev/lvol/vbdev_lvol.o 00:03:34.924 CC module/bdev/error/vbdev_error.o 00:03:34.924 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:34.924 CC module/bdev/passthru/vbdev_passthru.o 00:03:34.924 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.924 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:34.924 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:34.924 CC module/bdev/raid/bdev_raid_rpc.o 00:03:34.924 CC module/bdev/raid/bdev_raid_sb.o 00:03:34.924 CC module/bdev/aio/bdev_aio.o 00:03:34.924 CC module/bdev/nvme/bdev_nvme.o 00:03:34.924 CC module/bdev/raid/bdev_raid.o 00:03:34.924 CC module/bdev/aio/bdev_aio_rpc.o 00:03:34.924 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:34.924 
CC module/bdev/raid/raid0.o 00:03:34.924 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:34.924 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:34.924 CC module/bdev/nvme/nvme_rpc.o 00:03:34.924 CC module/bdev/raid/raid1.o 00:03:34.924 CC module/bdev/nvme/bdev_mdns_client.o 00:03:34.924 CC module/bdev/nvme/vbdev_opal.o 00:03:34.924 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:34.924 CC module/bdev/raid/concat.o 00:03:34.924 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:34.924 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:34.924 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:34.924 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:34.924 CC module/bdev/iscsi/bdev_iscsi.o 00:03:34.924 CC module/bdev/split/vbdev_split.o 00:03:34.924 CC module/bdev/split/vbdev_split_rpc.o 00:03:34.924 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:35.251 LIB libspdk_blobfs_bdev.a 00:03:35.251 LIB libspdk_bdev_null.a 00:03:35.251 SO libspdk_blobfs_bdev.so.6.0 00:03:35.251 SO libspdk_bdev_null.so.6.0 00:03:35.251 LIB libspdk_bdev_error.a 00:03:35.251 LIB libspdk_bdev_split.a 00:03:35.251 LIB libspdk_bdev_gpt.a 00:03:35.251 LIB libspdk_bdev_passthru.a 00:03:35.251 SYMLINK libspdk_bdev_null.so 00:03:35.251 SYMLINK libspdk_blobfs_bdev.so 00:03:35.251 SO libspdk_bdev_error.so.6.0 00:03:35.251 LIB libspdk_bdev_ftl.a 00:03:35.251 SO libspdk_bdev_split.so.6.0 00:03:35.251 SO libspdk_bdev_passthru.so.6.0 00:03:35.251 SO libspdk_bdev_gpt.so.6.0 00:03:35.251 LIB libspdk_bdev_malloc.a 00:03:35.251 SO libspdk_bdev_ftl.so.6.0 00:03:35.251 LIB libspdk_bdev_aio.a 00:03:35.251 LIB libspdk_bdev_zone_block.a 00:03:35.251 LIB libspdk_bdev_delay.a 00:03:35.251 SYMLINK libspdk_bdev_split.so 00:03:35.251 SYMLINK libspdk_bdev_error.so 00:03:35.251 SYMLINK libspdk_bdev_passthru.so 00:03:35.251 SO libspdk_bdev_malloc.so.6.0 00:03:35.251 SO libspdk_bdev_aio.so.6.0 00:03:35.251 SO libspdk_bdev_zone_block.so.6.0 00:03:35.251 LIB libspdk_bdev_iscsi.a 00:03:35.251 SO libspdk_bdev_delay.so.6.0 00:03:35.251 SYMLINK libspdk_bdev_gpt.so 00:03:35.251 SYMLINK libspdk_bdev_ftl.so 00:03:35.251 SO libspdk_bdev_iscsi.so.6.0 00:03:35.513 SYMLINK libspdk_bdev_malloc.so 00:03:35.513 SYMLINK libspdk_bdev_aio.so 00:03:35.513 SYMLINK libspdk_bdev_zone_block.so 00:03:35.513 SYMLINK libspdk_bdev_delay.so 00:03:35.513 LIB libspdk_bdev_lvol.a 00:03:35.513 LIB libspdk_bdev_virtio.a 00:03:35.513 SYMLINK libspdk_bdev_iscsi.so 00:03:35.513 SO libspdk_bdev_lvol.so.6.0 00:03:35.513 SO libspdk_bdev_virtio.so.6.0 00:03:35.513 SYMLINK libspdk_bdev_lvol.so 00:03:35.513 SYMLINK libspdk_bdev_virtio.so 00:03:35.774 LIB libspdk_bdev_raid.a 00:03:35.774 SO libspdk_bdev_raid.so.6.0 00:03:36.035 SYMLINK libspdk_bdev_raid.so 00:03:37.416 LIB libspdk_bdev_nvme.a 00:03:37.416 SO libspdk_bdev_nvme.so.7.1 00:03:37.416 SYMLINK libspdk_bdev_nvme.so 00:03:37.985 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.985 CC module/event/subsystems/vmd/vmd.o 00:03:37.985 CC module/event/subsystems/sock/sock.o 00:03:37.985 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.985 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.985 CC module/event/subsystems/scheduler/scheduler.o 00:03:38.245 CC module/event/subsystems/keyring/keyring.o 00:03:38.245 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:38.245 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:38.245 CC module/event/subsystems/fsdev/fsdev.o 00:03:38.245 LIB libspdk_event_keyring.a 00:03:38.245 LIB libspdk_event_scheduler.a 00:03:38.245 LIB libspdk_event_vmd.a 00:03:38.245 LIB libspdk_event_fsdev.a 00:03:38.246 LIB 
libspdk_event_iobuf.a 00:03:38.246 LIB libspdk_event_sock.a 00:03:38.246 LIB libspdk_event_vhost_blk.a 00:03:38.246 LIB libspdk_event_vfu_tgt.a 00:03:38.246 SO libspdk_event_scheduler.so.4.0 00:03:38.246 SO libspdk_event_vmd.so.6.0 00:03:38.246 SO libspdk_event_keyring.so.1.0 00:03:38.246 SO libspdk_event_vhost_blk.so.3.0 00:03:38.246 SO libspdk_event_fsdev.so.1.0 00:03:38.246 SO libspdk_event_iobuf.so.3.0 00:03:38.246 SO libspdk_event_sock.so.5.0 00:03:38.246 SO libspdk_event_vfu_tgt.so.3.0 00:03:38.506 SYMLINK libspdk_event_scheduler.so 00:03:38.506 SYMLINK libspdk_event_vmd.so 00:03:38.506 SYMLINK libspdk_event_keyring.so 00:03:38.506 SYMLINK libspdk_event_fsdev.so 00:03:38.506 SYMLINK libspdk_event_vhost_blk.so 00:03:38.506 SYMLINK libspdk_event_sock.so 00:03:38.506 SYMLINK libspdk_event_iobuf.so 00:03:38.506 SYMLINK libspdk_event_vfu_tgt.so 00:03:38.766 CC module/event/subsystems/accel/accel.o 00:03:39.026 LIB libspdk_event_accel.a 00:03:39.026 SO libspdk_event_accel.so.6.0 00:03:39.026 SYMLINK libspdk_event_accel.so 00:03:39.286 CC module/event/subsystems/bdev/bdev.o 00:03:39.546 LIB libspdk_event_bdev.a 00:03:39.546 SO libspdk_event_bdev.so.6.0 00:03:39.546 SYMLINK libspdk_event_bdev.so 00:03:40.117 CC module/event/subsystems/scsi/scsi.o 00:03:40.117 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:40.117 CC module/event/subsystems/nbd/nbd.o 00:03:40.117 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:40.117 CC module/event/subsystems/ublk/ublk.o 00:03:40.117 LIB libspdk_event_nbd.a 00:03:40.117 LIB libspdk_event_ublk.a 00:03:40.117 SO libspdk_event_nbd.so.6.0 00:03:40.117 LIB libspdk_event_scsi.a 00:03:40.117 SO libspdk_event_ublk.so.3.0 00:03:40.117 SO libspdk_event_scsi.so.6.0 00:03:40.379 LIB libspdk_event_nvmf.a 00:03:40.379 SYMLINK libspdk_event_nbd.so 00:03:40.379 SYMLINK libspdk_event_ublk.so 00:03:40.379 SYMLINK libspdk_event_scsi.so 00:03:40.379 SO libspdk_event_nvmf.so.6.0 00:03:40.379 SYMLINK libspdk_event_nvmf.so 00:03:40.640 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:40.640 CC module/event/subsystems/iscsi/iscsi.o 00:03:40.901 LIB libspdk_event_vhost_scsi.a 00:03:40.901 LIB libspdk_event_iscsi.a 00:03:40.901 SO libspdk_event_vhost_scsi.so.3.0 00:03:40.901 SO libspdk_event_iscsi.so.6.0 00:03:40.901 SYMLINK libspdk_event_vhost_scsi.so 00:03:40.901 SYMLINK libspdk_event_iscsi.so 00:03:41.161 SO libspdk.so.6.0 00:03:41.161 SYMLINK libspdk.so 00:03:41.735 CXX app/trace/trace.o 00:03:41.735 CC app/trace_record/trace_record.o 00:03:41.735 CC app/spdk_top/spdk_top.o 00:03:41.735 TEST_HEADER include/spdk/accel.h 00:03:41.735 TEST_HEADER include/spdk/accel_module.h 00:03:41.735 TEST_HEADER include/spdk/assert.h 00:03:41.735 TEST_HEADER include/spdk/barrier.h 00:03:41.735 CC app/spdk_nvme_perf/perf.o 00:03:41.735 TEST_HEADER include/spdk/base64.h 00:03:41.735 TEST_HEADER include/spdk/bdev.h 00:03:41.735 TEST_HEADER include/spdk/bit_array.h 00:03:41.735 TEST_HEADER include/spdk/bdev_module.h 00:03:41.735 TEST_HEADER include/spdk/bdev_zone.h 00:03:41.735 TEST_HEADER include/spdk/bit_pool.h 00:03:41.735 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:41.735 CC app/spdk_lspci/spdk_lspci.o 00:03:41.735 TEST_HEADER include/spdk/blob_bdev.h 00:03:41.735 TEST_HEADER include/spdk/blobfs.h 00:03:41.735 TEST_HEADER include/spdk/blob.h 00:03:41.735 CC app/spdk_nvme_identify/identify.o 00:03:41.735 TEST_HEADER include/spdk/conf.h 00:03:41.735 CC test/rpc_client/rpc_client_test.o 00:03:41.735 TEST_HEADER include/spdk/config.h 00:03:41.735 TEST_HEADER include/spdk/cpuset.h 
00:03:41.735 TEST_HEADER include/spdk/crc16.h 00:03:41.735 CC app/spdk_nvme_discover/discovery_aer.o 00:03:41.735 TEST_HEADER include/spdk/crc32.h 00:03:41.735 TEST_HEADER include/spdk/crc64.h 00:03:41.735 TEST_HEADER include/spdk/dma.h 00:03:41.735 TEST_HEADER include/spdk/dif.h 00:03:41.735 TEST_HEADER include/spdk/env_dpdk.h 00:03:41.735 TEST_HEADER include/spdk/endian.h 00:03:41.735 TEST_HEADER include/spdk/env.h 00:03:41.735 TEST_HEADER include/spdk/event.h 00:03:41.735 TEST_HEADER include/spdk/fd_group.h 00:03:41.735 TEST_HEADER include/spdk/fd.h 00:03:41.735 TEST_HEADER include/spdk/file.h 00:03:41.735 TEST_HEADER include/spdk/fsdev.h 00:03:41.735 TEST_HEADER include/spdk/fsdev_module.h 00:03:41.735 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:41.735 TEST_HEADER include/spdk/ftl.h 00:03:41.735 TEST_HEADER include/spdk/gpt_spec.h 00:03:41.735 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:41.735 TEST_HEADER include/spdk/hexlify.h 00:03:41.735 CC app/nvmf_tgt/nvmf_main.o 00:03:41.735 TEST_HEADER include/spdk/histogram_data.h 00:03:41.735 CC app/spdk_dd/spdk_dd.o 00:03:41.735 TEST_HEADER include/spdk/idxd.h 00:03:41.735 TEST_HEADER include/spdk/idxd_spec.h 00:03:41.735 TEST_HEADER include/spdk/init.h 00:03:41.735 TEST_HEADER include/spdk/ioat.h 00:03:41.735 TEST_HEADER include/spdk/ioat_spec.h 00:03:41.735 TEST_HEADER include/spdk/iscsi_spec.h 00:03:41.735 TEST_HEADER include/spdk/jsonrpc.h 00:03:41.735 TEST_HEADER include/spdk/json.h 00:03:41.735 TEST_HEADER include/spdk/keyring_module.h 00:03:41.735 TEST_HEADER include/spdk/keyring.h 00:03:41.735 TEST_HEADER include/spdk/likely.h 00:03:41.735 TEST_HEADER include/spdk/log.h 00:03:41.735 TEST_HEADER include/spdk/md5.h 00:03:41.735 TEST_HEADER include/spdk/lvol.h 00:03:41.735 TEST_HEADER include/spdk/memory.h 00:03:41.735 CC app/iscsi_tgt/iscsi_tgt.o 00:03:41.735 TEST_HEADER include/spdk/mmio.h 00:03:41.735 TEST_HEADER include/spdk/net.h 00:03:41.735 TEST_HEADER include/spdk/nbd.h 00:03:41.735 TEST_HEADER include/spdk/nvme.h 00:03:41.735 TEST_HEADER include/spdk/notify.h 00:03:41.735 TEST_HEADER include/spdk/nvme_intel.h 00:03:41.735 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:41.735 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:41.735 TEST_HEADER include/spdk/nvme_spec.h 00:03:41.735 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:41.735 TEST_HEADER include/spdk/nvme_zns.h 00:03:41.735 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:41.735 TEST_HEADER include/spdk/nvmf.h 00:03:41.735 TEST_HEADER include/spdk/nvmf_spec.h 00:03:41.735 TEST_HEADER include/spdk/nvmf_transport.h 00:03:41.735 TEST_HEADER include/spdk/opal_spec.h 00:03:41.735 TEST_HEADER include/spdk/opal.h 00:03:41.735 TEST_HEADER include/spdk/pipe.h 00:03:41.735 CC app/spdk_tgt/spdk_tgt.o 00:03:41.735 TEST_HEADER include/spdk/queue.h 00:03:41.735 TEST_HEADER include/spdk/pci_ids.h 00:03:41.735 TEST_HEADER include/spdk/reduce.h 00:03:41.735 TEST_HEADER include/spdk/rpc.h 00:03:41.735 TEST_HEADER include/spdk/scheduler.h 00:03:41.735 TEST_HEADER include/spdk/scsi.h 00:03:41.735 TEST_HEADER include/spdk/scsi_spec.h 00:03:41.735 TEST_HEADER include/spdk/stdinc.h 00:03:41.735 TEST_HEADER include/spdk/sock.h 00:03:41.735 TEST_HEADER include/spdk/thread.h 00:03:41.735 TEST_HEADER include/spdk/trace.h 00:03:41.735 TEST_HEADER include/spdk/trace_parser.h 00:03:41.735 TEST_HEADER include/spdk/string.h 00:03:41.735 TEST_HEADER include/spdk/tree.h 00:03:41.735 TEST_HEADER include/spdk/ublk.h 00:03:41.735 TEST_HEADER include/spdk/util.h 00:03:41.735 TEST_HEADER include/spdk/version.h 
00:03:41.735 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:41.735 TEST_HEADER include/spdk/uuid.h 00:03:41.735 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:41.735 TEST_HEADER include/spdk/vhost.h 00:03:41.735 TEST_HEADER include/spdk/xor.h 00:03:41.735 TEST_HEADER include/spdk/vmd.h 00:03:41.735 CXX test/cpp_headers/accel.o 00:03:41.736 TEST_HEADER include/spdk/zipf.h 00:03:41.736 CXX test/cpp_headers/accel_module.o 00:03:41.736 CXX test/cpp_headers/assert.o 00:03:41.736 CXX test/cpp_headers/barrier.o 00:03:41.736 CXX test/cpp_headers/base64.o 00:03:41.736 CXX test/cpp_headers/bdev.o 00:03:41.736 CXX test/cpp_headers/bdev_module.o 00:03:41.736 CXX test/cpp_headers/bdev_zone.o 00:03:41.736 CXX test/cpp_headers/bit_array.o 00:03:41.736 CXX test/cpp_headers/bit_pool.o 00:03:41.736 CXX test/cpp_headers/blob_bdev.o 00:03:41.736 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.736 CXX test/cpp_headers/blobfs.o 00:03:41.736 CXX test/cpp_headers/blob.o 00:03:41.736 CXX test/cpp_headers/conf.o 00:03:41.736 CXX test/cpp_headers/config.o 00:03:41.736 CXX test/cpp_headers/crc16.o 00:03:41.736 CXX test/cpp_headers/cpuset.o 00:03:41.736 CXX test/cpp_headers/crc64.o 00:03:41.736 CXX test/cpp_headers/crc32.o 00:03:41.736 CXX test/cpp_headers/dma.o 00:03:41.736 CXX test/cpp_headers/dif.o 00:03:41.736 CXX test/cpp_headers/endian.o 00:03:41.736 CXX test/cpp_headers/env_dpdk.o 00:03:41.736 CXX test/cpp_headers/event.o 00:03:41.736 CXX test/cpp_headers/env.o 00:03:41.736 CXX test/cpp_headers/fd_group.o 00:03:41.736 CXX test/cpp_headers/file.o 00:03:41.736 CXX test/cpp_headers/fd.o 00:03:41.736 CXX test/cpp_headers/fsdev.o 00:03:41.736 CXX test/cpp_headers/ftl.o 00:03:41.736 CXX test/cpp_headers/fsdev_module.o 00:03:41.736 CXX test/cpp_headers/fuse_dispatcher.o 00:03:41.736 CXX test/cpp_headers/gpt_spec.o 00:03:41.736 CXX test/cpp_headers/idxd.o 00:03:41.736 CXX test/cpp_headers/hexlify.o 00:03:41.736 CXX test/cpp_headers/histogram_data.o 00:03:41.736 CXX test/cpp_headers/ioat.o 00:03:41.736 CXX test/cpp_headers/ioat_spec.o 00:03:41.736 CXX test/cpp_headers/idxd_spec.o 00:03:41.736 CXX test/cpp_headers/init.o 00:03:41.736 CXX test/cpp_headers/iscsi_spec.o 00:03:41.736 CXX test/cpp_headers/json.o 00:03:41.736 CXX test/cpp_headers/jsonrpc.o 00:03:41.736 CXX test/cpp_headers/keyring_module.o 00:03:41.736 CXX test/cpp_headers/keyring.o 00:03:41.736 CXX test/cpp_headers/log.o 00:03:41.736 CXX test/cpp_headers/likely.o 00:03:41.736 CXX test/cpp_headers/md5.o 00:03:41.736 CXX test/cpp_headers/lvol.o 00:03:41.736 CXX test/cpp_headers/mmio.o 00:03:41.736 CXX test/cpp_headers/memory.o 00:03:41.736 CXX test/cpp_headers/net.o 00:03:41.736 CXX test/cpp_headers/notify.o 00:03:41.736 CXX test/cpp_headers/nbd.o 00:03:41.736 CXX test/cpp_headers/nvme.o 00:03:41.736 CXX test/cpp_headers/nvme_spec.o 00:03:41.736 CXX test/cpp_headers/nvme_intel.o 00:03:41.736 CXX test/cpp_headers/nvme_ocssd.o 00:03:41.736 CXX test/cpp_headers/nvme_zns.o 00:03:41.736 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.736 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:41.736 CXX test/cpp_headers/nvmf_transport.o 00:03:41.736 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:41.736 CXX test/cpp_headers/nvmf_spec.o 00:03:41.736 CXX test/cpp_headers/opal_spec.o 00:03:41.736 CXX test/cpp_headers/opal.o 00:03:41.736 CXX test/cpp_headers/nvmf.o 00:03:41.736 CXX test/cpp_headers/pci_ids.o 00:03:41.736 CXX test/cpp_headers/pipe.o 00:03:41.736 CXX test/cpp_headers/rpc.o 00:03:41.736 CC examples/util/zipf/zipf.o 00:03:41.736 CXX test/cpp_headers/queue.o 00:03:41.736 CXX 
test/cpp_headers/scheduler.o 00:03:41.736 CXX test/cpp_headers/reduce.o 00:03:41.998 CXX test/cpp_headers/scsi_spec.o 00:03:41.998 CXX test/cpp_headers/scsi.o 00:03:41.998 CXX test/cpp_headers/sock.o 00:03:41.998 CXX test/cpp_headers/string.o 00:03:41.998 CXX test/cpp_headers/thread.o 00:03:41.998 CXX test/cpp_headers/trace_parser.o 00:03:41.998 CXX test/cpp_headers/stdinc.o 00:03:41.998 CXX test/cpp_headers/tree.o 00:03:41.998 CXX test/cpp_headers/trace.o 00:03:41.998 CC test/thread/poller_perf/poller_perf.o 00:03:41.998 CXX test/cpp_headers/version.o 00:03:41.998 CXX test/cpp_headers/util.o 00:03:41.998 CXX test/cpp_headers/ublk.o 00:03:41.998 CXX test/cpp_headers/uuid.o 00:03:41.998 CXX test/cpp_headers/vfio_user_pci.o 00:03:41.998 CC app/fio/nvme/fio_plugin.o 00:03:41.998 CXX test/cpp_headers/zipf.o 00:03:41.998 CXX test/cpp_headers/vhost.o 00:03:41.998 CXX test/cpp_headers/xor.o 00:03:41.998 CXX test/cpp_headers/vmd.o 00:03:41.998 CXX test/cpp_headers/vfio_user_spec.o 00:03:41.998 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.998 CC test/app/jsoncat/jsoncat.o 00:03:41.998 CC test/app/histogram_perf/histogram_perf.o 00:03:41.998 CC examples/ioat/perf/perf.o 00:03:41.998 CC examples/ioat/verify/verify.o 00:03:41.998 CC test/app/stub/stub.o 00:03:41.998 CC test/env/memory/memory_ut.o 00:03:41.998 CC app/fio/bdev/fio_plugin.o 00:03:41.998 CC test/env/vtophys/vtophys.o 00:03:41.998 CC test/env/pci/pci_ut.o 00:03:41.998 CC test/app/bdev_svc/bdev_svc.o 00:03:41.998 CC test/dma/test_dma/test_dma.o 00:03:41.998 LINK spdk_lspci 00:03:41.998 LINK rpc_client_test 00:03:41.998 LINK interrupt_tgt 00:03:41.998 LINK spdk_nvme_discover 00:03:41.998 LINK nvmf_tgt 00:03:42.259 LINK spdk_trace_record 00:03:42.259 LINK spdk_tgt 00:03:42.259 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.259 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.517 LINK iscsi_tgt 00:03:42.517 CC test/env/mem_callbacks/mem_callbacks.o 00:03:42.517 LINK poller_perf 00:03:42.517 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.517 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:42.517 LINK stub 00:03:42.517 LINK ioat_perf 00:03:42.517 LINK histogram_perf 00:03:42.517 LINK spdk_dd 00:03:42.517 LINK bdev_svc 00:03:42.517 LINK jsoncat 00:03:42.517 LINK zipf 00:03:42.799 LINK spdk_trace 00:03:42.799 LINK env_dpdk_post_init 00:03:42.799 LINK verify 00:03:42.799 LINK vtophys 00:03:42.799 LINK spdk_nvme_perf 00:03:43.060 CC test/event/reactor_perf/reactor_perf.o 00:03:43.060 CC test/event/reactor/reactor.o 00:03:43.060 CC test/event/event_perf/event_perf.o 00:03:43.060 CC test/event/app_repeat/app_repeat.o 00:03:43.060 LINK nvme_fuzz 00:03:43.060 LINK vhost_fuzz 00:03:43.060 CC test/event/scheduler/scheduler.o 00:03:43.060 LINK pci_ut 00:03:43.060 LINK spdk_bdev 00:03:43.060 LINK spdk_nvme 00:03:43.060 LINK test_dma 00:03:43.060 CC app/vhost/vhost.o 00:03:43.060 LINK spdk_nvme_identify 00:03:43.060 LINK spdk_top 00:03:43.060 LINK reactor_perf 00:03:43.060 CC examples/idxd/perf/perf.o 00:03:43.060 LINK event_perf 00:03:43.060 CC examples/sock/hello_world/hello_sock.o 00:03:43.060 LINK reactor 00:03:43.060 CC examples/vmd/led/led.o 00:03:43.060 CC examples/vmd/lsvmd/lsvmd.o 00:03:43.060 LINK mem_callbacks 00:03:43.060 LINK app_repeat 00:03:43.321 CC examples/thread/thread/thread_ex.o 00:03:43.321 LINK scheduler 00:03:43.321 LINK memory_ut 00:03:43.321 LINK led 00:03:43.321 LINK vhost 00:03:43.321 LINK lsvmd 00:03:43.321 LINK hello_sock 00:03:43.321 LINK idxd_perf 00:03:43.583 LINK thread 00:03:43.583 CC 
test/nvme/reset/reset.o 00:03:43.583 CC test/nvme/overhead/overhead.o 00:03:43.583 CC test/nvme/aer/aer.o 00:03:43.583 CC test/nvme/e2edp/nvme_dp.o 00:03:43.583 CC test/nvme/boot_partition/boot_partition.o 00:03:43.583 CC test/nvme/sgl/sgl.o 00:03:43.583 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:43.583 CC test/nvme/err_injection/err_injection.o 00:03:43.583 CC test/nvme/startup/startup.o 00:03:43.583 CC test/nvme/connect_stress/connect_stress.o 00:03:43.583 CC test/nvme/reserve/reserve.o 00:03:43.583 CC test/nvme/cuse/cuse.o 00:03:43.583 CC test/nvme/fused_ordering/fused_ordering.o 00:03:43.583 CC test/nvme/compliance/nvme_compliance.o 00:03:43.583 CC test/nvme/fdp/fdp.o 00:03:43.583 CC test/nvme/simple_copy/simple_copy.o 00:03:43.583 CC test/blobfs/mkfs/mkfs.o 00:03:43.583 CC test/accel/dif/dif.o 00:03:43.844 CC test/lvol/esnap/esnap.o 00:03:43.844 LINK boot_partition 00:03:43.844 LINK err_injection 00:03:43.844 LINK startup 00:03:43.844 LINK connect_stress 00:03:43.844 LINK doorbell_aers 00:03:43.844 LINK fused_ordering 00:03:43.844 LINK mkfs 00:03:43.844 CC examples/nvme/hotplug/hotplug.o 00:03:43.844 LINK reserve 00:03:43.844 CC examples/nvme/reconnect/reconnect.o 00:03:43.844 LINK sgl 00:03:43.844 LINK simple_copy 00:03:43.844 CC examples/nvme/hello_world/hello_world.o 00:03:43.844 LINK aer 00:03:43.844 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:43.844 LINK reset 00:03:43.844 LINK overhead 00:03:44.105 LINK nvme_dp 00:03:44.105 CC examples/nvme/arbitration/arbitration.o 00:03:44.105 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:44.105 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:44.105 CC examples/nvme/abort/abort.o 00:03:44.105 LINK nvme_compliance 00:03:44.105 LINK iscsi_fuzz 00:03:44.105 LINK fdp 00:03:44.105 CC examples/accel/perf/accel_perf.o 00:03:44.105 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:44.105 CC examples/blob/cli/blobcli.o 00:03:44.105 CC examples/blob/hello_world/hello_blob.o 00:03:44.105 LINK hotplug 00:03:44.105 LINK pmr_persistence 00:03:44.105 LINK cmb_copy 00:03:44.365 LINK hello_world 00:03:44.365 LINK reconnect 00:03:44.365 LINK arbitration 00:03:44.365 LINK dif 00:03:44.365 LINK abort 00:03:44.365 LINK hello_fsdev 00:03:44.365 LINK hello_blob 00:03:44.365 LINK nvme_manage 00:03:44.626 LINK accel_perf 00:03:44.626 LINK blobcli 00:03:44.887 LINK cuse 00:03:44.887 CC test/bdev/bdevio/bdevio.o 00:03:45.148 CC examples/bdev/hello_world/hello_bdev.o 00:03:45.148 CC examples/bdev/bdevperf/bdevperf.o 00:03:45.409 LINK bdevio 00:03:45.409 LINK hello_bdev 00:03:45.980 LINK bdevperf 00:03:46.549 CC examples/nvmf/nvmf/nvmf.o 00:03:46.808 LINK nvmf 00:03:48.773 LINK esnap 00:03:48.773 00:03:48.773 real 0m53.635s 00:03:48.773 user 6m20.855s 00:03:48.773 sys 3m34.998s 00:03:48.773 09:21:24 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:48.773 09:21:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.773 ************************************ 00:03:48.773 END TEST make 00:03:48.773 ************************************ 00:03:48.773 09:21:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.773 09:21:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.773 09:21:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.773 09:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.773 09:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.773 09:21:24 -- pm/common@44 -- $ pid=2434121 
00:03:48.773 09:21:24 -- pm/common@50 -- $ kill -TERM 2434121 00:03:48.773 09:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.773 09:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.773 09:21:24 -- pm/common@44 -- $ pid=2434122 00:03:48.773 09:21:24 -- pm/common@50 -- $ kill -TERM 2434122 00:03:48.773 09:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.773 09:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:48.773 09:21:24 -- pm/common@44 -- $ pid=2434123 00:03:48.773 09:21:24 -- pm/common@50 -- $ kill -TERM 2434123 00:03:48.773 09:21:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.773 09:21:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:48.773 09:21:24 -- pm/common@44 -- $ pid=2434148 00:03:48.773 09:21:24 -- pm/common@50 -- $ sudo -E kill -TERM 2434148 00:03:48.773 09:21:24 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:48.773 09:21:24 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:48.773 09:21:24 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:48.773 09:21:24 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:48.773 09:21:24 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:49.034 09:21:24 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:49.034 09:21:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:49.034 09:21:24 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:49.034 09:21:24 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:49.034 09:21:24 -- scripts/common.sh@336 -- # IFS=.-: 00:03:49.034 09:21:24 -- scripts/common.sh@336 -- # read -ra ver1 00:03:49.034 09:21:24 -- scripts/common.sh@337 -- # IFS=.-: 00:03:49.034 09:21:24 -- scripts/common.sh@337 -- # read -ra ver2 00:03:49.034 09:21:24 -- scripts/common.sh@338 -- # local 'op=<' 00:03:49.034 09:21:24 -- scripts/common.sh@340 -- # ver1_l=2 00:03:49.034 09:21:24 -- scripts/common.sh@341 -- # ver2_l=1 00:03:49.034 09:21:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:49.034 09:21:24 -- scripts/common.sh@344 -- # case "$op" in 00:03:49.034 09:21:24 -- scripts/common.sh@345 -- # : 1 00:03:49.034 09:21:24 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:49.034 09:21:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:49.034 09:21:24 -- scripts/common.sh@365 -- # decimal 1 00:03:49.034 09:21:24 -- scripts/common.sh@353 -- # local d=1 00:03:49.034 09:21:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:49.034 09:21:24 -- scripts/common.sh@355 -- # echo 1 00:03:49.034 09:21:24 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:49.034 09:21:24 -- scripts/common.sh@366 -- # decimal 2 00:03:49.034 09:21:24 -- scripts/common.sh@353 -- # local d=2 00:03:49.034 09:21:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:49.034 09:21:24 -- scripts/common.sh@355 -- # echo 2 00:03:49.034 09:21:24 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:49.034 09:21:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:49.034 09:21:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:49.034 09:21:24 -- scripts/common.sh@368 -- # return 0 00:03:49.034 09:21:24 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:49.034 09:21:24 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.034 --rc genhtml_branch_coverage=1 00:03:49.034 --rc genhtml_function_coverage=1 00:03:49.034 --rc genhtml_legend=1 00:03:49.034 --rc geninfo_all_blocks=1 00:03:49.034 --rc geninfo_unexecuted_blocks=1 00:03:49.034 00:03:49.034 ' 00:03:49.034 09:21:24 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.034 --rc genhtml_branch_coverage=1 00:03:49.034 --rc genhtml_function_coverage=1 00:03:49.034 --rc genhtml_legend=1 00:03:49.034 --rc geninfo_all_blocks=1 00:03:49.034 --rc geninfo_unexecuted_blocks=1 00:03:49.034 00:03:49.034 ' 00:03:49.034 09:21:24 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.034 --rc genhtml_branch_coverage=1 00:03:49.034 --rc genhtml_function_coverage=1 00:03:49.034 --rc genhtml_legend=1 00:03:49.034 --rc geninfo_all_blocks=1 00:03:49.034 --rc geninfo_unexecuted_blocks=1 00:03:49.034 00:03:49.034 ' 00:03:49.034 09:21:24 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:49.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:49.034 --rc genhtml_branch_coverage=1 00:03:49.034 --rc genhtml_function_coverage=1 00:03:49.034 --rc genhtml_legend=1 00:03:49.034 --rc geninfo_all_blocks=1 00:03:49.034 --rc geninfo_unexecuted_blocks=1 00:03:49.034 00:03:49.034 ' 00:03:49.034 09:21:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:49.034 09:21:24 -- nvmf/common.sh@7 -- # uname -s 00:03:49.034 09:21:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:49.034 09:21:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:49.034 09:21:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:49.034 09:21:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:49.034 09:21:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:49.034 09:21:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:49.034 09:21:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:49.034 09:21:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:49.034 09:21:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:49.034 09:21:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:49.034 09:21:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:49.034 09:21:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:49.034 09:21:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:49.034 09:21:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:49.034 09:21:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:49.034 09:21:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:49.034 09:21:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:49.034 09:21:24 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:49.034 09:21:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:49.034 09:21:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:49.034 09:21:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:49.034 09:21:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.034 09:21:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.034 09:21:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.034 09:21:24 -- paths/export.sh@5 -- # export PATH 00:03:49.034 09:21:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:49.034 09:21:24 -- nvmf/common.sh@51 -- # : 0 00:03:49.034 09:21:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:49.034 09:21:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:49.034 09:21:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:49.034 09:21:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:49.034 09:21:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:49.034 09:21:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:49.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:49.034 09:21:24 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:49.034 09:21:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:49.034 09:21:24 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:49.034 09:21:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:49.034 09:21:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:49.034 09:21:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:49.034 09:21:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:49.034 09:21:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:03:49.034 09:21:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:49.034 09:21:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:49.034 09:21:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:49.034 09:21:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:49.034 09:21:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:49.034 09:21:24 -- spdk/autotest.sh@48 -- # udevadm_pid=2515855 00:03:49.034 09:21:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:49.034 09:21:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:49.034 09:21:24 -- pm/common@17 -- # local monitor 00:03:49.034 09:21:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.034 09:21:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.034 09:21:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.034 09:21:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.034 09:21:24 -- pm/common@21 -- # date +%s 00:03:49.034 09:21:24 -- pm/common@21 -- # date +%s 00:03:49.034 09:21:24 -- pm/common@25 -- # sleep 1 00:03:49.034 09:21:24 -- pm/common@21 -- # date +%s 00:03:49.034 09:21:24 -- pm/common@21 -- # date +%s 00:03:49.034 09:21:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733732484 00:03:49.034 09:21:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733732484 00:03:49.034 09:21:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733732484 00:03:49.034 09:21:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733732484 00:03:49.034 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733732484_collect-vmstat.pm.log 00:03:49.034 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733732484_collect-cpu-load.pm.log 00:03:49.034 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733732484_collect-cpu-temp.pm.log 00:03:49.034 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733732484_collect-bmc-pm.bmc.pm.log 00:03:49.972 09:21:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.972 09:21:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.972 09:21:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:49.972 09:21:25 -- common/autotest_common.sh@10 -- # set +x 00:03:49.972 09:21:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.972 09:21:25 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:49.972 09:21:25 -- common/autotest_common.sh@10 -- # set +x 00:03:49.972 09:21:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:49.972 09:21:25 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.231 09:21:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.231 09:21:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:50.231 09:21:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.231 09:21:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:50.231 09:21:25 -- common/autotest_common.sh@1457 -- # uname 00:03:50.232 09:21:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:50.232 09:21:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:50.232 09:21:25 -- common/autotest_common.sh@1477 -- # uname 00:03:50.232 09:21:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:50.232 09:21:25 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:50.232 09:21:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:50.232 lcov: LCOV version 1.15 00:03:50.232 09:21:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:05.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:05.134 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.249 09:21:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:23.249 09:21:55 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.249 09:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:23.249 09:21:55 -- spdk/autotest.sh@78 -- # rm -f 00:04:23.249 09:21:56 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.191 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:24.191 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:24.191 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:24.453 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:24.719 09:22:00 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:04:24.719 09:22:00 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:24.719 09:22:00 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:24.719 09:22:00 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:24.719 09:22:00 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:24.719 09:22:00 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:24.719 09:22:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:24.719 09:22:00 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:04:24.719 09:22:00 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:24.719 09:22:00 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:24.719 09:22:00 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:24.719 09:22:00 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:24.719 09:22:00 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:24.719 09:22:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:24.719 09:22:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:24.719 09:22:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:24.719 09:22:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:24.719 09:22:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:24.719 09:22:00 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:24.719 No valid GPT data, bailing 00:04:24.719 09:22:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:24.719 09:22:00 -- scripts/common.sh@394 -- # pt= 00:04:24.719 09:22:00 -- scripts/common.sh@395 -- # return 1 00:04:24.719 09:22:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:24.719 1+0 records in 00:04:24.719 1+0 records out 00:04:24.719 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474485 s, 221 MB/s 00:04:24.719 09:22:00 -- spdk/autotest.sh@105 -- # sync 00:04:24.719 09:22:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:24.719 09:22:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:24.719 09:22:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:34.756 09:22:08 -- spdk/autotest.sh@111 -- # uname -s 00:04:34.756 09:22:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:34.756 09:22:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:34.756 09:22:08 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:36.667 Hugepages 00:04:36.667 node hugesize free / total 00:04:36.667 node0 1048576kB 0 / 0 00:04:36.667 node0 2048kB 0 / 0 00:04:36.667 node1 1048576kB 0 / 0 00:04:36.667 node1 2048kB 0 / 0 00:04:36.667 00:04:36.667 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.667 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:36.667 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:36.667 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:36.667 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:36.667 I/OAT 0000:80:01.1 8086 0b00 1 
ioatdma - - 00:04:36.928 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:36.928 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:36.928 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:36.928 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:36.928 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:36.928 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:36.928 09:22:12 -- spdk/autotest.sh@117 -- # uname -s 00:04:36.928 09:22:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:36.928 09:22:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:36.928 09:22:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.254 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:40.255 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:42.168 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:42.429 09:22:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:43.372 09:22:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:43.372 09:22:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:43.372 09:22:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:43.372 09:22:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:43.373 09:22:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.373 09:22:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.373 09:22:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.373 09:22:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.373 09:22:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.373 09:22:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:43.373 09:22:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:43.373 09:22:18 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.581 Waiting for block devices as requested 00:04:47.581 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:47.581 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:47.581 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:47.581 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:47.581 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:47.582 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:47.582 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:47.582 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:47.582 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:47.842 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:47.842 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 
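The bdf list built at common/autotest_common.sh@1499 above drives everything that follows: scripts/gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address (traddr). A minimal standalone sketch of that discovery step, assuming the same workspace layout as this run:

  #!/usr/bin/env bash
  # Discover NVMe controller PCI addresses the way the trace above does:
  # gen_nvme.sh prints a JSON config; jq pulls out each params.traddr field.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"    # prints 0000:65:00.0 on this machine

On this box the list has a single entry, 0000:65:00.0, which is the same controller setup.sh rebinds between nvme and vfio-pci in the surrounding output.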
00:04:47.842 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:48.101 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:48.101 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:48.101 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:48.360 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:48.360 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:48.621 09:22:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:48.621 09:22:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:48.621 09:22:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:48.621 09:22:23 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:04:48.621 09:22:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:48.621 09:22:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:48.621 09:22:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:48.621 09:22:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:48.621 09:22:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:48.621 09:22:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:48.621 09:22:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:48.621 09:22:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:48.621 09:22:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:48.621 09:22:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:48.621 09:22:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:48.621 09:22:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:48.621 09:22:24 -- common/autotest_common.sh@1543 -- # continue 00:04:48.621 09:22:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:48.621 09:22:24 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.621 09:22:24 -- common/autotest_common.sh@10 -- # set +x 00:04:48.881 09:22:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:48.881 09:22:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.881 09:22:24 -- common/autotest_common.sh@10 -- # set +x 00:04:48.881 09:22:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.324 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.2 
(8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:52.324 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:52.896 09:22:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:52.896 09:22:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:52.896 09:22:28 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 09:22:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:52.896 09:22:28 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:52.896 09:22:28 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:52.896 09:22:28 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:52.896 09:22:28 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:52.896 09:22:28 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:52.896 09:22:28 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:52.896 09:22:28 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:52.896 09:22:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:52.896 09:22:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:52.896 09:22:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:52.896 09:22:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:52.896 09:22:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:52.896 09:22:28 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:52.896 09:22:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:04:52.896 09:22:28 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:52.896 09:22:28 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:52.896 09:22:28 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:52.896 09:22:28 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:52.896 09:22:28 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:52.896 09:22:28 -- common/autotest_common.sh@1572 -- # return 0 00:04:52.896 09:22:28 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:52.896 09:22:28 -- common/autotest_common.sh@1580 -- # return 0 00:04:52.896 09:22:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:52.896 09:22:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:52.896 09:22:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:52.896 09:22:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:52.896 09:22:28 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:52.896 09:22:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:52.896 09:22:28 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 09:22:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:52.896 09:22:28 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:52.896 09:22:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.896 09:22:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.896 09:22:28 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 START TEST env 00:04:52.896 ************************************ 00:04:52.896 09:22:28 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 
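opal_revert_cleanup above walks the discovered bdfs and only acts on controllers whose PCI device id matches 0x0a54; this machine's 144d:a80a Samsung controller fails the match, so both checks return 0 and the revert is skipped. A sketch of that filter, with the bdf taken from this run:

  # Read the PCI device id from sysfs and compare it against 0x0a54, as the
  # trace at common/autotest_common.sh@1566-1567 does.
  bdf=0000:65:00.0
  device=$(cat "/sys/bus/pci/devices/$bdf/device")   # '0xa80a' here
  if [[ $device == 0x0a54 ]]; then
      echo "$bdf matches; OPAL revert would run"
  fi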
00:04:53.157 * Looking for test storage... 00:04:53.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.157 09:22:28 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.157 09:22:28 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.157 09:22:28 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.157 09:22:28 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.157 09:22:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.157 09:22:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.157 09:22:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.157 09:22:28 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.158 09:22:28 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.158 09:22:28 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.158 09:22:28 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.158 09:22:28 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.158 09:22:28 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.158 09:22:28 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.158 09:22:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.158 09:22:28 env -- scripts/common.sh@344 -- # case "$op" in 00:04:53.158 09:22:28 env -- scripts/common.sh@345 -- # : 1 00:04:53.158 09:22:28 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.158 09:22:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.158 09:22:28 env -- scripts/common.sh@365 -- # decimal 1 00:04:53.158 09:22:28 env -- scripts/common.sh@353 -- # local d=1 00:04:53.158 09:22:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.158 09:22:28 env -- scripts/common.sh@355 -- # echo 1 00:04:53.158 09:22:28 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.158 09:22:28 env -- scripts/common.sh@366 -- # decimal 2 00:04:53.158 09:22:28 env -- scripts/common.sh@353 -- # local d=2 00:04:53.158 09:22:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.158 09:22:28 env -- scripts/common.sh@355 -- # echo 2 00:04:53.158 09:22:28 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.158 09:22:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.158 09:22:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.158 09:22:28 env -- scripts/common.sh@368 -- # return 0 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.158 --rc genhtml_branch_coverage=1 00:04:53.158 --rc genhtml_function_coverage=1 00:04:53.158 --rc genhtml_legend=1 00:04:53.158 --rc geninfo_all_blocks=1 00:04:53.158 --rc geninfo_unexecuted_blocks=1 00:04:53.158 00:04:53.158 ' 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.158 --rc genhtml_branch_coverage=1 00:04:53.158 --rc genhtml_function_coverage=1 00:04:53.158 --rc genhtml_legend=1 00:04:53.158 --rc geninfo_all_blocks=1 00:04:53.158 --rc geninfo_unexecuted_blocks=1 00:04:53.158 00:04:53.158 ' 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.158 --rc genhtml_branch_coverage=1 00:04:53.158 
--rc genhtml_function_coverage=1 00:04:53.158 --rc genhtml_legend=1 00:04:53.158 --rc geninfo_all_blocks=1 00:04:53.158 --rc geninfo_unexecuted_blocks=1 00:04:53.158 00:04:53.158 ' 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.158 --rc genhtml_branch_coverage=1 00:04:53.158 --rc genhtml_function_coverage=1 00:04:53.158 --rc genhtml_legend=1 00:04:53.158 --rc geninfo_all_blocks=1 00:04:53.158 --rc geninfo_unexecuted_blocks=1 00:04:53.158 00:04:53.158 ' 00:04:53.158 09:22:28 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.158 09:22:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.158 09:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.158 ************************************ 00:04:53.158 START TEST env_memory 00:04:53.158 ************************************ 00:04:53.158 09:22:28 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.158 00:04:53.158 00:04:53.158 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.158 http://cunit.sourceforge.net/ 00:04:53.158 00:04:53.158 00:04:53.158 Suite: memory 00:04:53.158 Test: alloc and free memory map ...[2024-12-09 09:22:28.589680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.158 passed 00:04:53.420 Test: mem map translation ...[2024-12-09 09:22:28.615383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.420 [2024-12-09 09:22:28.615415] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.420 [2024-12-09 09:22:28.615463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.420 [2024-12-09 09:22:28.615471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.420 passed 00:04:53.420 Test: mem map registration ...[2024-12-09 09:22:28.670860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:53.420 [2024-12-09 09:22:28.670898] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:53.420 passed 00:04:53.420 Test: mem map adjacent registrations ...passed 00:04:53.420 00:04:53.420 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.420 suites 1 1 n/a 0 0 00:04:53.420 tests 4 4 4 0 0 00:04:53.420 asserts 152 152 152 0 n/a 00:04:53.420 00:04:53.420 Elapsed time = 0.194 seconds 00:04:53.420 00:04:53.420 real 0m0.209s 00:04:53.420 user 0m0.191s 00:04:53.420 sys 0m0.017s 00:04:53.420 09:22:28 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.420 09:22:28 env.env_memory -- 
common/autotest_common.sh@10 -- # set +x 00:04:53.420 ************************************ 00:04:53.420 END TEST env_memory 00:04:53.420 ************************************ 00:04:53.420 09:22:28 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.420 09:22:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.420 09:22:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.420 09:22:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.420 ************************************ 00:04:53.420 START TEST env_vtophys 00:04:53.420 ************************************ 00:04:53.420 09:22:28 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.420 EAL: lib.eal log level changed from notice to debug 00:04:53.420 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.420 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.420 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.420 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.420 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.420 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.420 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.420 EAL: Detected lcore 7 as core 7 on socket 0 00:04:53.420 EAL: Detected lcore 8 as core 8 on socket 0 00:04:53.420 EAL: Detected lcore 9 as core 9 on socket 0 00:04:53.420 EAL: Detected lcore 10 as core 10 on socket 0 00:04:53.420 EAL: Detected lcore 11 as core 11 on socket 0 00:04:53.420 EAL: Detected lcore 12 as core 12 on socket 0 00:04:53.420 EAL: Detected lcore 13 as core 13 on socket 0 00:04:53.420 EAL: Detected lcore 14 as core 14 on socket 0 00:04:53.420 EAL: Detected lcore 15 as core 15 on socket 0 00:04:53.420 EAL: Detected lcore 16 as core 16 on socket 0 00:04:53.420 EAL: Detected lcore 17 as core 17 on socket 0 00:04:53.420 EAL: Detected lcore 18 as core 18 on socket 0 00:04:53.420 EAL: Detected lcore 19 as core 19 on socket 0 00:04:53.420 EAL: Detected lcore 20 as core 20 on socket 0 00:04:53.420 EAL: Detected lcore 21 as core 21 on socket 0 00:04:53.420 EAL: Detected lcore 22 as core 22 on socket 0 00:04:53.421 EAL: Detected lcore 23 as core 23 on socket 0 00:04:53.421 EAL: Detected lcore 24 as core 24 on socket 0 00:04:53.421 EAL: Detected lcore 25 as core 25 on socket 0 00:04:53.421 EAL: Detected lcore 26 as core 26 on socket 0 00:04:53.421 EAL: Detected lcore 27 as core 27 on socket 0 00:04:53.421 EAL: Detected lcore 28 as core 28 on socket 0 00:04:53.421 EAL: Detected lcore 29 as core 29 on socket 0 00:04:53.421 EAL: Detected lcore 30 as core 30 on socket 0 00:04:53.421 EAL: Detected lcore 31 as core 31 on socket 0 00:04:53.421 EAL: Detected lcore 32 as core 32 on socket 0 00:04:53.421 EAL: Detected lcore 33 as core 33 on socket 0 00:04:53.421 EAL: Detected lcore 34 as core 34 on socket 0 00:04:53.421 EAL: Detected lcore 35 as core 35 on socket 0 00:04:53.421 EAL: Detected lcore 36 as core 0 on socket 1 00:04:53.421 EAL: Detected lcore 37 as core 1 on socket 1 00:04:53.421 EAL: Detected lcore 38 as core 2 on socket 1 00:04:53.421 EAL: Detected lcore 39 as core 3 on socket 1 00:04:53.421 EAL: Detected lcore 40 as core 4 on socket 1 00:04:53.421 EAL: Detected lcore 41 as core 5 on socket 1 00:04:53.421 EAL: Detected lcore 42 as core 6 on socket 1 00:04:53.421 EAL: Detected lcore 43 as core 7 on socket 1 00:04:53.421 EAL: Detected lcore 44 as core 8 on socket 1 00:04:53.421 EAL: Detected 
lcore 45 as core 9 on socket 1 00:04:53.421 EAL: Detected lcore 46 as core 10 on socket 1 00:04:53.421 EAL: Detected lcore 47 as core 11 on socket 1 00:04:53.421 EAL: Detected lcore 48 as core 12 on socket 1 00:04:53.421 EAL: Detected lcore 49 as core 13 on socket 1 00:04:53.421 EAL: Detected lcore 50 as core 14 on socket 1 00:04:53.421 EAL: Detected lcore 51 as core 15 on socket 1 00:04:53.421 EAL: Detected lcore 52 as core 16 on socket 1 00:04:53.421 EAL: Detected lcore 53 as core 17 on socket 1 00:04:53.421 EAL: Detected lcore 54 as core 18 on socket 1 00:04:53.421 EAL: Detected lcore 55 as core 19 on socket 1 00:04:53.421 EAL: Detected lcore 56 as core 20 on socket 1 00:04:53.421 EAL: Detected lcore 57 as core 21 on socket 1 00:04:53.421 EAL: Detected lcore 58 as core 22 on socket 1 00:04:53.421 EAL: Detected lcore 59 as core 23 on socket 1 00:04:53.421 EAL: Detected lcore 60 as core 24 on socket 1 00:04:53.421 EAL: Detected lcore 61 as core 25 on socket 1 00:04:53.421 EAL: Detected lcore 62 as core 26 on socket 1 00:04:53.421 EAL: Detected lcore 63 as core 27 on socket 1 00:04:53.421 EAL: Detected lcore 64 as core 28 on socket 1 00:04:53.421 EAL: Detected lcore 65 as core 29 on socket 1 00:04:53.421 EAL: Detected lcore 66 as core 30 on socket 1 00:04:53.421 EAL: Detected lcore 67 as core 31 on socket 1 00:04:53.421 EAL: Detected lcore 68 as core 32 on socket 1 00:04:53.421 EAL: Detected lcore 69 as core 33 on socket 1 00:04:53.421 EAL: Detected lcore 70 as core 34 on socket 1 00:04:53.421 EAL: Detected lcore 71 as core 35 on socket 1 00:04:53.421 EAL: Detected lcore 72 as core 0 on socket 0 00:04:53.421 EAL: Detected lcore 73 as core 1 on socket 0 00:04:53.421 EAL: Detected lcore 74 as core 2 on socket 0 00:04:53.421 EAL: Detected lcore 75 as core 3 on socket 0 00:04:53.421 EAL: Detected lcore 76 as core 4 on socket 0 00:04:53.421 EAL: Detected lcore 77 as core 5 on socket 0 00:04:53.421 EAL: Detected lcore 78 as core 6 on socket 0 00:04:53.421 EAL: Detected lcore 79 as core 7 on socket 0 00:04:53.421 EAL: Detected lcore 80 as core 8 on socket 0 00:04:53.421 EAL: Detected lcore 81 as core 9 on socket 0 00:04:53.421 EAL: Detected lcore 82 as core 10 on socket 0 00:04:53.421 EAL: Detected lcore 83 as core 11 on socket 0 00:04:53.421 EAL: Detected lcore 84 as core 12 on socket 0 00:04:53.421 EAL: Detected lcore 85 as core 13 on socket 0 00:04:53.421 EAL: Detected lcore 86 as core 14 on socket 0 00:04:53.421 EAL: Detected lcore 87 as core 15 on socket 0 00:04:53.421 EAL: Detected lcore 88 as core 16 on socket 0 00:04:53.421 EAL: Detected lcore 89 as core 17 on socket 0 00:04:53.421 EAL: Detected lcore 90 as core 18 on socket 0 00:04:53.421 EAL: Detected lcore 91 as core 19 on socket 0 00:04:53.421 EAL: Detected lcore 92 as core 20 on socket 0 00:04:53.421 EAL: Detected lcore 93 as core 21 on socket 0 00:04:53.421 EAL: Detected lcore 94 as core 22 on socket 0 00:04:53.421 EAL: Detected lcore 95 as core 23 on socket 0 00:04:53.421 EAL: Detected lcore 96 as core 24 on socket 0 00:04:53.421 EAL: Detected lcore 97 as core 25 on socket 0 00:04:53.421 EAL: Detected lcore 98 as core 26 on socket 0 00:04:53.421 EAL: Detected lcore 99 as core 27 on socket 0 00:04:53.421 EAL: Detected lcore 100 as core 28 on socket 0 00:04:53.421 EAL: Detected lcore 101 as core 29 on socket 0 00:04:53.421 EAL: Detected lcore 102 as core 30 on socket 0 00:04:53.421 EAL: Detected lcore 103 as core 31 on socket 0 00:04:53.421 EAL: Detected lcore 104 as core 32 on socket 0 00:04:53.421 EAL: Detected lcore 105 as core 33 
on socket 0 00:04:53.421 EAL: Detected lcore 106 as core 34 on socket 0 00:04:53.421 EAL: Detected lcore 107 as core 35 on socket 0 00:04:53.421 EAL: Detected lcore 108 as core 0 on socket 1 00:04:53.421 EAL: Detected lcore 109 as core 1 on socket 1 00:04:53.421 EAL: Detected lcore 110 as core 2 on socket 1 00:04:53.421 EAL: Detected lcore 111 as core 3 on socket 1 00:04:53.421 EAL: Detected lcore 112 as core 4 on socket 1 00:04:53.421 EAL: Detected lcore 113 as core 5 on socket 1 00:04:53.421 EAL: Detected lcore 114 as core 6 on socket 1 00:04:53.421 EAL: Detected lcore 115 as core 7 on socket 1 00:04:53.421 EAL: Detected lcore 116 as core 8 on socket 1 00:04:53.421 EAL: Detected lcore 117 as core 9 on socket 1 00:04:53.421 EAL: Detected lcore 118 as core 10 on socket 1 00:04:53.421 EAL: Detected lcore 119 as core 11 on socket 1 00:04:53.421 EAL: Detected lcore 120 as core 12 on socket 1 00:04:53.421 EAL: Detected lcore 121 as core 13 on socket 1 00:04:53.421 EAL: Detected lcore 122 as core 14 on socket 1 00:04:53.421 EAL: Detected lcore 123 as core 15 on socket 1 00:04:53.421 EAL: Detected lcore 124 as core 16 on socket 1 00:04:53.421 EAL: Detected lcore 125 as core 17 on socket 1 00:04:53.421 EAL: Detected lcore 126 as core 18 on socket 1 00:04:53.421 EAL: Detected lcore 127 as core 19 on socket 1 00:04:53.421 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:53.421 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:53.421 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:53.421 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:53.421 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:53.421 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:53.421 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:53.421 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:53.421 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:53.421 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:53.421 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:53.421 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:53.421 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:53.421 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:53.421 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:53.421 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:53.421 EAL: Maximum logical cores by configuration: 128 00:04:53.421 EAL: Detected CPU lcores: 128 00:04:53.421 EAL: Detected NUMA nodes: 2 00:04:53.421 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:53.421 EAL: Detected shared linkage of DPDK 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:53.421 EAL: Registered [vdev] bus. 
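The lcore table EAL prints above (128 usable lcores across 2 sockets, 16 skipped) is read from the kernel's CPU topology files. A rough shell equivalent of that scan, assuming only the standard Linux sysfs layout rather than anything SPDK-specific:

  # Reproduce EAL's 'Detected lcore N as core C on socket S' lines from sysfs.
  for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
      n=${cpu##*cpu}
      core=$(cat "$cpu/topology/core_id")
      socket=$(cat "$cpu/topology/physical_package_id")
      echo "lcore $n as core $core on socket $socket"
  done

The 'Skipped lcore' entries for 128-143 reflect the configured maximum of 128 logical cores reported just above, not missing hardware.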
00:04:53.421 EAL: bus.vdev log level changed from disabled to notice 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:53.421 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:53.421 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:53.421 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:53.421 EAL: No shared files mode enabled, IPC will be disabled 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: Bus pci wants IOVA as 'DC' 00:04:53.683 EAL: Bus vdev wants IOVA as 'DC' 00:04:53.683 EAL: Buses did not request a specific IOVA mode. 00:04:53.683 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:53.683 EAL: Selected IOVA mode 'VA' 00:04:53.683 EAL: Probing VFIO support... 00:04:53.683 EAL: IOMMU type 1 (Type 1) is supported 00:04:53.683 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:53.683 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:53.683 EAL: VFIO support initialized 00:04:53.683 EAL: Ask a virtual area of 0x2e000 bytes 00:04:53.683 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:53.683 EAL: Setting up physically contiguous memory... 
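EAL settles on IOVA mode 'VA' above because both buses report no preference ('DC') and a working IOMMU plus VFIO type-1 support are present. A quick host-side check for those same preconditions; this is a generic sysfs/module probe offered as an assumption, not a command taken from the harness:

  # IOMMU active <=> iommu groups exist; the VFIO path also needs vfio_pci.
  if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
      echo "IOMMU enabled: $(ls /sys/kernel/iommu_groups | wc -l) groups"
  fi
  lsmod | grep -q '^vfio_pci' && echo 'vfio-pci module loaded'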
00:04:53.683 EAL: Setting maximum number of open files to 524288 00:04:53.683 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:53.683 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:53.683 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:53.683 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:53.683 EAL: Ask a virtual area of 0x61000 bytes 00:04:53.683 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:53.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:53.683 EAL: Ask a virtual area of 0x400000000 bytes 00:04:53.683 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:53.683 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:53.683 EAL: Hugepages will be freed exactly as allocated. 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: TSC frequency is ~2400000 KHz 00:04:53.683 EAL: Main lcore 0 is ready (tid=7f47062eba00;cpuset=[0]) 00:04:53.683 EAL: Trying to obtain current memory policy. 00:04:53.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.683 EAL: Restoring previous memory policy: 0 00:04:53.683 EAL: request: mp_malloc_sync 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: Heap on socket 0 was expanded by 2MB 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: No shared files mode enabled, IPC is disabled 00:04:53.683 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:53.683 EAL: Mem event callback 'spdk:(nil)' registered 00:04:53.683 00:04:53.683 00:04:53.683 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.683 http://cunit.sourceforge.net/ 00:04:53.683 00:04:53.683 00:04:53.684 Suite: components_suite 00:04:53.684 Test: vtophys_malloc_test ...passed 00:04:53.684 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 4MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 4MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 6MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 6MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 10MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 10MB 00:04:53.684 EAL: Trying to obtain current memory policy. 
00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 18MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 18MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 34MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 34MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 66MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 66MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 130MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was shrunk by 130MB 00:04:53.684 EAL: Trying to obtain current memory policy. 00:04:53.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.684 EAL: Restoring previous memory policy: 4 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.684 EAL: request: mp_malloc_sync 00:04:53.684 EAL: No shared files mode enabled, IPC is disabled 00:04:53.684 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.684 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.944 EAL: request: mp_malloc_sync 00:04:53.944 EAL: No shared files mode enabled, IPC is disabled 00:04:53.944 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.944 EAL: Trying to obtain current memory policy. 
00:04:53.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.944 EAL: Restoring previous memory policy: 4 00:04:53.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.944 EAL: request: mp_malloc_sync 00:04:53.944 EAL: No shared files mode enabled, IPC is disabled 00:04:53.944 EAL: Heap on socket 0 was expanded by 514MB 00:04:53.944 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.944 EAL: request: mp_malloc_sync 00:04:53.944 EAL: No shared files mode enabled, IPC is disabled 00:04:53.944 EAL: Heap on socket 0 was shrunk by 514MB 00:04:53.944 EAL: Trying to obtain current memory policy. 00:04:53.944 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.206 EAL: Restoring previous memory policy: 4 00:04:54.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.206 EAL: request: mp_malloc_sync 00:04:54.206 EAL: No shared files mode enabled, IPC is disabled 00:04:54.206 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.206 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.468 EAL: request: mp_malloc_sync 00:04:54.468 EAL: No shared files mode enabled, IPC is disabled 00:04:54.468 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.468 passed 00:04:54.468 00:04:54.468 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.468 suites 1 1 n/a 0 0 00:04:54.468 tests 2 2 2 0 0 00:04:54.468 asserts 497 497 497 0 n/a 00:04:54.468 00:04:54.468 Elapsed time = 0.684 seconds 00:04:54.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.468 EAL: request: mp_malloc_sync 00:04:54.468 EAL: No shared files mode enabled, IPC is disabled 00:04:54.468 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.468 EAL: No shared files mode enabled, IPC is disabled 00:04:54.468 EAL: No shared files mode enabled, IPC is disabled 00:04:54.468 EAL: No shared files mode enabled, IPC is disabled 00:04:54.468 00:04:54.468 real 0m0.842s 00:04:54.468 user 0m0.430s 00:04:54.468 sys 0m0.385s 00:04:54.468 09:22:29 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.468 09:22:29 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.468 ************************************ 00:04:54.468 END TEST env_vtophys 00:04:54.468 ************************************ 00:04:54.468 09:22:29 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.468 09:22:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.468 09:22:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.468 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.468 ************************************ 00:04:54.469 START TEST env_pci 00:04:54.469 ************************************ 00:04:54.469 09:22:29 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:54.469 00:04:54.469 00:04:54.469 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.469 http://cunit.sourceforge.net/ 00:04:54.469 00:04:54.469 00:04:54.469 Suite: pci 00:04:54.469 Test: pci_hook ...[2024-12-09 09:22:29.763353] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2534953 has claimed it 00:04:54.469 EAL: Cannot find device (10000:00:01.0) 00:04:54.469 EAL: Failed to attach device on primary process 00:04:54.469 passed 00:04:54.469 00:04:54.469 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:54.469 suites 1 1 n/a 0 0 00:04:54.469 tests 1 1 1 0 0 00:04:54.469 asserts 25 25 25 0 n/a 00:04:54.469 00:04:54.469 Elapsed time = 0.031 seconds 00:04:54.469 00:04:54.469 real 0m0.051s 00:04:54.469 user 0m0.018s 00:04:54.469 sys 0m0.033s 00:04:54.469 09:22:29 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.469 09:22:29 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.469 ************************************ 00:04:54.469 END TEST env_pci 00:04:54.469 ************************************ 00:04:54.469 09:22:29 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.469 09:22:29 env -- env/env.sh@15 -- # uname 00:04:54.469 09:22:29 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.469 09:22:29 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.469 09:22:29 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.469 09:22:29 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:54.469 09:22:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.469 09:22:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.469 ************************************ 00:04:54.469 START TEST env_dpdk_post_init 00:04:54.469 ************************************ 00:04:54.469 09:22:29 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.469 EAL: Detected CPU lcores: 128 00:04:54.469 EAL: Detected NUMA nodes: 2 00:04:54.469 EAL: Detected shared linkage of DPDK 00:04:54.469 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.730 EAL: Selected IOVA mode 'VA' 00:04:54.730 EAL: VFIO support initialized 00:04:54.730 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.730 EAL: Using IOMMU type 1 (Type 1) 00:04:54.730 EAL: Ignore mapping IO port bar(1) 00:04:54.990 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:54.990 EAL: Ignore mapping IO port bar(1) 00:04:55.251 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:55.251 EAL: Ignore mapping IO port bar(1) 00:04:55.512 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:55.512 EAL: Ignore mapping IO port bar(1) 00:04:55.512 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:55.772 EAL: Ignore mapping IO port bar(1) 00:04:55.772 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:56.033 EAL: Ignore mapping IO port bar(1) 00:04:56.033 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:56.294 EAL: Ignore mapping IO port bar(1) 00:04:56.294 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:56.294 EAL: Ignore mapping IO port bar(1) 00:04:56.555 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:56.815 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:56.815 EAL: Ignore mapping IO port bar(1) 00:04:57.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:57.076 EAL: Ignore mapping IO port bar(1) 00:04:57.076 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:57.336 EAL: Ignore mapping IO port bar(1) 00:04:57.336 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:57.596 EAL: Ignore mapping IO port bar(1) 00:04:57.596 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:57.858 EAL: Ignore mapping IO port bar(1) 00:04:57.858 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:57.858 EAL: Ignore mapping IO port bar(1) 00:04:58.118 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:58.118 EAL: Ignore mapping IO port bar(1) 00:04:58.379 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:58.379 EAL: Ignore mapping IO port bar(1) 00:04:58.640 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:58.640 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:58.640 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:58.640 Starting DPDK initialization... 00:04:58.640 Starting SPDK post initialization... 00:04:58.640 SPDK NVMe probe 00:04:58.640 Attaching to 0000:65:00.0 00:04:58.640 Attached to 0000:65:00.0 00:04:58.640 Cleaning up... 00:05:00.550 00:05:00.550 real 0m5.753s 00:05:00.550 user 0m0.204s 00:05:00.550 sys 0m0.103s 00:05:00.550 09:22:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.550 09:22:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.550 ************************************ 00:05:00.550 END TEST env_dpdk_post_init 00:05:00.550 ************************************ 00:05:00.550 09:22:35 env -- env/env.sh@26 -- # uname 00:05:00.550 09:22:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.550 09:22:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.550 09:22:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.550 09:22:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.550 09:22:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.550 ************************************ 00:05:00.550 START TEST env_mem_callbacks 00:05:00.550 ************************************ 00:05:00.550 09:22:35 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.550 EAL: Detected CPU lcores: 128 00:05:00.550 EAL: Detected NUMA nodes: 2 00:05:00.550 EAL: Detected shared linkage of DPDK 00:05:00.550 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.550 EAL: Selected IOVA mode 'VA' 00:05:00.550 EAL: VFIO support initialized 00:05:00.550 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.550 00:05:00.550 00:05:00.550 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.550 http://cunit.sourceforge.net/ 00:05:00.550 00:05:00.550 00:05:00.550 Suite: memory 00:05:00.550 Test: test ... 
00:05:00.550 register 0x200000200000 2097152 00:05:00.550 malloc 3145728 00:05:00.550 register 0x200000400000 4194304 00:05:00.550 buf 0x200000500000 len 3145728 PASSED 00:05:00.550 malloc 64 00:05:00.550 buf 0x2000004fff40 len 64 PASSED 00:05:00.550 malloc 4194304 00:05:00.550 register 0x200000800000 6291456 00:05:00.550 buf 0x200000a00000 len 4194304 PASSED 00:05:00.550 free 0x200000500000 3145728 00:05:00.550 free 0x2000004fff40 64 00:05:00.550 unregister 0x200000400000 4194304 PASSED 00:05:00.550 free 0x200000a00000 4194304 00:05:00.550 unregister 0x200000800000 6291456 PASSED 00:05:00.550 malloc 8388608 00:05:00.550 register 0x200000400000 10485760 00:05:00.550 buf 0x200000600000 len 8388608 PASSED 00:05:00.550 free 0x200000600000 8388608 00:05:00.550 unregister 0x200000400000 10485760 PASSED 00:05:00.550 passed 00:05:00.550 00:05:00.550 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.550 suites 1 1 n/a 0 0 00:05:00.550 tests 1 1 1 0 0 00:05:00.550 asserts 15 15 15 0 n/a 00:05:00.550 00:05:00.550 Elapsed time = 0.010 seconds 00:05:00.550 00:05:00.550 real 0m0.066s 00:05:00.550 user 0m0.017s 00:05:00.550 sys 0m0.049s 00:05:00.550 09:22:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.550 09:22:35 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:00.550 ************************************ 00:05:00.550 END TEST env_mem_callbacks 00:05:00.550 ************************************ 00:05:00.550 00:05:00.550 real 0m7.537s 00:05:00.550 user 0m1.132s 00:05:00.550 sys 0m0.964s 00:05:00.550 09:22:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.550 09:22:35 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.550 ************************************ 00:05:00.550 END TEST env 00:05:00.550 ************************************ 00:05:00.551 09:22:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.551 09:22:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.551 09:22:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.551 09:22:35 -- common/autotest_common.sh@10 -- # set +x 00:05:00.551 ************************************ 00:05:00.551 START TEST rpc 00:05:00.551 ************************************ 00:05:00.551 09:22:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:00.810 * Looking for test storage... 
00:05:00.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.810 09:22:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.810 09:22:36 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.810 09:22:36 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.810 09:22:36 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.810 09:22:36 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.810 09:22:36 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.810 09:22:36 rpc -- scripts/common.sh@345 -- # : 1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.810 09:22:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.810 09:22:36 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.810 09:22:36 rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.810 09:22:36 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.810 09:22:36 rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.810 09:22:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.810 09:22:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.810 09:22:36 rpc -- scripts/common.sh@368 -- # return 0 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.810 09:22:36 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.810 --rc genhtml_branch_coverage=1 00:05:00.810 --rc genhtml_function_coverage=1 00:05:00.810 --rc genhtml_legend=1 00:05:00.810 --rc geninfo_all_blocks=1 00:05:00.810 --rc geninfo_unexecuted_blocks=1 00:05:00.810 00:05:00.810 ' 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.811 --rc genhtml_branch_coverage=1 00:05:00.811 --rc genhtml_function_coverage=1 00:05:00.811 --rc genhtml_legend=1 00:05:00.811 --rc geninfo_all_blocks=1 00:05:00.811 --rc geninfo_unexecuted_blocks=1 00:05:00.811 00:05:00.811 ' 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.811 --rc genhtml_branch_coverage=1 00:05:00.811 --rc genhtml_function_coverage=1 
00:05:00.811 --rc genhtml_legend=1 00:05:00.811 --rc geninfo_all_blocks=1 00:05:00.811 --rc geninfo_unexecuted_blocks=1 00:05:00.811 00:05:00.811 ' 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.811 --rc genhtml_branch_coverage=1 00:05:00.811 --rc genhtml_function_coverage=1 00:05:00.811 --rc genhtml_legend=1 00:05:00.811 --rc geninfo_all_blocks=1 00:05:00.811 --rc geninfo_unexecuted_blocks=1 00:05:00.811 00:05:00.811 ' 00:05:00.811 09:22:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2536327 00:05:00.811 09:22:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.811 09:22:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2536327 00:05:00.811 09:22:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 2536327 ']' 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.811 09:22:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.811 [2024-12-09 09:22:36.182392] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:00.811 [2024-12-09 09:22:36.182457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2536327 ] 00:05:01.105 [2024-12-09 09:22:36.273233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.105 [2024-12-09 09:22:36.300850] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.105 [2024-12-09 09:22:36.300899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2536327' to capture a snapshot of events at runtime. 00:05:01.105 [2024-12-09 09:22:36.300908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.105 [2024-12-09 09:22:36.300915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.105 [2024-12-09 09:22:36.300922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2536327 for offline analysis/debug. 
00:05:01.105 [2024-12-09 09:22:36.301663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.676 09:22:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.676 09:22:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:01.676 09:22:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.676 09:22:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:01.676 09:22:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:01.676 09:22:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:01.676 09:22:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.676 09:22:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.676 09:22:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.676 ************************************ 00:05:01.676 START TEST rpc_integrity 00:05:01.676 ************************************ 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.676 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.676 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.676 { 00:05:01.676 "name": "Malloc0", 00:05:01.676 "aliases": [ 00:05:01.676 "1d5f513b-30b7-4c63-9f56-32d9a543da60" 00:05:01.676 ], 00:05:01.676 "product_name": "Malloc disk", 00:05:01.676 "block_size": 512, 00:05:01.676 "num_blocks": 16384, 00:05:01.676 "uuid": "1d5f513b-30b7-4c63-9f56-32d9a543da60", 00:05:01.676 "assigned_rate_limits": { 00:05:01.676 "rw_ios_per_sec": 0, 00:05:01.676 "rw_mbytes_per_sec": 0, 00:05:01.676 "r_mbytes_per_sec": 0, 00:05:01.676 "w_mbytes_per_sec": 0 00:05:01.676 }, 
00:05:01.676 "claimed": false, 00:05:01.676 "zoned": false, 00:05:01.676 "supported_io_types": { 00:05:01.676 "read": true, 00:05:01.676 "write": true, 00:05:01.676 "unmap": true, 00:05:01.676 "flush": true, 00:05:01.676 "reset": true, 00:05:01.676 "nvme_admin": false, 00:05:01.676 "nvme_io": false, 00:05:01.676 "nvme_io_md": false, 00:05:01.676 "write_zeroes": true, 00:05:01.676 "zcopy": true, 00:05:01.676 "get_zone_info": false, 00:05:01.677 "zone_management": false, 00:05:01.677 "zone_append": false, 00:05:01.677 "compare": false, 00:05:01.677 "compare_and_write": false, 00:05:01.677 "abort": true, 00:05:01.677 "seek_hole": false, 00:05:01.677 "seek_data": false, 00:05:01.677 "copy": true, 00:05:01.677 "nvme_iov_md": false 00:05:01.677 }, 00:05:01.677 "memory_domains": [ 00:05:01.677 { 00:05:01.677 "dma_device_id": "system", 00:05:01.677 "dma_device_type": 1 00:05:01.677 }, 00:05:01.677 { 00:05:01.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.677 "dma_device_type": 2 00:05:01.677 } 00:05:01.677 ], 00:05:01.677 "driver_specific": {} 00:05:01.677 } 00:05:01.677 ]' 00:05:01.677 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 [2024-12-09 09:22:37.165937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:01.937 [2024-12-09 09:22:37.165982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.937 [2024-12-09 09:22:37.165998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb173f0 00:05:01.937 [2024-12-09 09:22:37.166012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.937 [2024-12-09 09:22:37.167588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.937 [2024-12-09 09:22:37.167624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.937 Passthru0 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.937 { 00:05:01.937 "name": "Malloc0", 00:05:01.937 "aliases": [ 00:05:01.937 "1d5f513b-30b7-4c63-9f56-32d9a543da60" 00:05:01.937 ], 00:05:01.937 "product_name": "Malloc disk", 00:05:01.937 "block_size": 512, 00:05:01.937 "num_blocks": 16384, 00:05:01.937 "uuid": "1d5f513b-30b7-4c63-9f56-32d9a543da60", 00:05:01.937 "assigned_rate_limits": { 00:05:01.937 "rw_ios_per_sec": 0, 00:05:01.937 "rw_mbytes_per_sec": 0, 00:05:01.937 "r_mbytes_per_sec": 0, 00:05:01.937 "w_mbytes_per_sec": 0 00:05:01.937 }, 00:05:01.937 "claimed": true, 00:05:01.937 "claim_type": "exclusive_write", 00:05:01.937 "zoned": false, 00:05:01.937 "supported_io_types": { 00:05:01.937 "read": true, 00:05:01.937 "write": true, 00:05:01.937 "unmap": true, 00:05:01.937 "flush": 
true, 00:05:01.937 "reset": true, 00:05:01.937 "nvme_admin": false, 00:05:01.937 "nvme_io": false, 00:05:01.937 "nvme_io_md": false, 00:05:01.937 "write_zeroes": true, 00:05:01.937 "zcopy": true, 00:05:01.937 "get_zone_info": false, 00:05:01.937 "zone_management": false, 00:05:01.937 "zone_append": false, 00:05:01.937 "compare": false, 00:05:01.937 "compare_and_write": false, 00:05:01.937 "abort": true, 00:05:01.937 "seek_hole": false, 00:05:01.937 "seek_data": false, 00:05:01.937 "copy": true, 00:05:01.937 "nvme_iov_md": false 00:05:01.937 }, 00:05:01.937 "memory_domains": [ 00:05:01.937 { 00:05:01.937 "dma_device_id": "system", 00:05:01.937 "dma_device_type": 1 00:05:01.937 }, 00:05:01.937 { 00:05:01.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.937 "dma_device_type": 2 00:05:01.937 } 00:05:01.937 ], 00:05:01.937 "driver_specific": {} 00:05:01.937 }, 00:05:01.937 { 00:05:01.937 "name": "Passthru0", 00:05:01.937 "aliases": [ 00:05:01.937 "44af480f-63f2-5d81-ba7d-6f88cbdc3491" 00:05:01.937 ], 00:05:01.937 "product_name": "passthru", 00:05:01.937 "block_size": 512, 00:05:01.937 "num_blocks": 16384, 00:05:01.937 "uuid": "44af480f-63f2-5d81-ba7d-6f88cbdc3491", 00:05:01.937 "assigned_rate_limits": { 00:05:01.937 "rw_ios_per_sec": 0, 00:05:01.937 "rw_mbytes_per_sec": 0, 00:05:01.937 "r_mbytes_per_sec": 0, 00:05:01.937 "w_mbytes_per_sec": 0 00:05:01.937 }, 00:05:01.937 "claimed": false, 00:05:01.937 "zoned": false, 00:05:01.937 "supported_io_types": { 00:05:01.937 "read": true, 00:05:01.937 "write": true, 00:05:01.937 "unmap": true, 00:05:01.937 "flush": true, 00:05:01.937 "reset": true, 00:05:01.937 "nvme_admin": false, 00:05:01.937 "nvme_io": false, 00:05:01.937 "nvme_io_md": false, 00:05:01.937 "write_zeroes": true, 00:05:01.937 "zcopy": true, 00:05:01.937 "get_zone_info": false, 00:05:01.937 "zone_management": false, 00:05:01.937 "zone_append": false, 00:05:01.937 "compare": false, 00:05:01.937 "compare_and_write": false, 00:05:01.937 "abort": true, 00:05:01.937 "seek_hole": false, 00:05:01.937 "seek_data": false, 00:05:01.937 "copy": true, 00:05:01.937 "nvme_iov_md": false 00:05:01.937 }, 00:05:01.937 "memory_domains": [ 00:05:01.937 { 00:05:01.937 "dma_device_id": "system", 00:05:01.937 "dma_device_type": 1 00:05:01.937 }, 00:05:01.937 { 00:05:01.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.937 "dma_device_type": 2 00:05:01.937 } 00:05:01.937 ], 00:05:01.937 "driver_specific": { 00:05:01.937 "passthru": { 00:05:01.937 "name": "Passthru0", 00:05:01.937 "base_bdev_name": "Malloc0" 00:05:01.937 } 00:05:01.937 } 00:05:01.937 } 00:05:01.937 ]' 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.937 09:22:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.937 00:05:01.937 real 0m0.312s 00:05:01.937 user 0m0.196s 00:05:01.937 sys 0m0.041s 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.937 09:22:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.937 ************************************ 00:05:01.937 END TEST rpc_integrity 00:05:01.937 ************************************ 00:05:01.937 09:22:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:01.937 09:22:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.937 09:22:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.937 09:22:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.197 ************************************ 00:05:02.197 START TEST rpc_plugins 00:05:02.197 ************************************ 00:05:02.197 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.198 { 00:05:02.198 "name": "Malloc1", 00:05:02.198 "aliases": [ 00:05:02.198 "69a50f9b-183b-4439-a5e3-7970968d695c" 00:05:02.198 ], 00:05:02.198 "product_name": "Malloc disk", 00:05:02.198 "block_size": 4096, 00:05:02.198 "num_blocks": 256, 00:05:02.198 "uuid": "69a50f9b-183b-4439-a5e3-7970968d695c", 00:05:02.198 "assigned_rate_limits": { 00:05:02.198 "rw_ios_per_sec": 0, 00:05:02.198 "rw_mbytes_per_sec": 0, 00:05:02.198 "r_mbytes_per_sec": 0, 00:05:02.198 "w_mbytes_per_sec": 0 00:05:02.198 }, 00:05:02.198 "claimed": false, 00:05:02.198 "zoned": false, 00:05:02.198 "supported_io_types": { 00:05:02.198 "read": true, 00:05:02.198 "write": true, 00:05:02.198 "unmap": true, 00:05:02.198 "flush": true, 00:05:02.198 "reset": true, 00:05:02.198 "nvme_admin": false, 00:05:02.198 "nvme_io": false, 00:05:02.198 "nvme_io_md": false, 00:05:02.198 "write_zeroes": true, 00:05:02.198 "zcopy": true, 00:05:02.198 "get_zone_info": false, 00:05:02.198 "zone_management": false, 00:05:02.198 "zone_append": false, 00:05:02.198 "compare": false, 00:05:02.198 "compare_and_write": false, 00:05:02.198 "abort": true, 00:05:02.198 "seek_hole": false, 00:05:02.198 "seek_data": false, 00:05:02.198 "copy": true, 00:05:02.198 "nvme_iov_md": false 
00:05:02.198 }, 00:05:02.198 "memory_domains": [ 00:05:02.198 { 00:05:02.198 "dma_device_id": "system", 00:05:02.198 "dma_device_type": 1 00:05:02.198 }, 00:05:02.198 { 00:05:02.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.198 "dma_device_type": 2 00:05:02.198 } 00:05:02.198 ], 00:05:02.198 "driver_specific": {} 00:05:02.198 } 00:05:02.198 ]' 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.198 09:22:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.198 00:05:02.198 real 0m0.159s 00:05:02.198 user 0m0.094s 00:05:02.198 sys 0m0.024s 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.198 09:22:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.198 ************************************ 00:05:02.198 END TEST rpc_plugins 00:05:02.198 ************************************ 00:05:02.198 09:22:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.198 09:22:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.198 09:22:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.198 09:22:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.458 ************************************ 00:05:02.458 START TEST rpc_trace_cmd_test 00:05:02.458 ************************************ 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.458 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:02.458 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2536327", 00:05:02.458 "tpoint_group_mask": "0x8", 00:05:02.458 "iscsi_conn": { 00:05:02.458 "mask": "0x2", 00:05:02.458 "tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.458 "scsi": { 00:05:02.458 "mask": "0x4", 00:05:02.458 "tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.458 "bdev": { 00:05:02.458 "mask": "0x8", 00:05:02.458 "tpoint_mask": "0xffffffffffffffff" 00:05:02.458 }, 00:05:02.458 "nvmf_rdma": { 00:05:02.458 "mask": "0x10", 00:05:02.458 "tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.458 "nvmf_tcp": { 00:05:02.458 "mask": "0x20", 00:05:02.458 
"tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.458 "ftl": { 00:05:02.458 "mask": "0x40", 00:05:02.458 "tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.458 "blobfs": { 00:05:02.458 "mask": "0x80", 00:05:02.458 "tpoint_mask": "0x0" 00:05:02.458 }, 00:05:02.459 "dsa": { 00:05:02.459 "mask": "0x200", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "thread": { 00:05:02.459 "mask": "0x400", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "nvme_pcie": { 00:05:02.459 "mask": "0x800", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "iaa": { 00:05:02.459 "mask": "0x1000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "nvme_tcp": { 00:05:02.459 "mask": "0x2000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "bdev_nvme": { 00:05:02.459 "mask": "0x4000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "sock": { 00:05:02.459 "mask": "0x8000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "blob": { 00:05:02.459 "mask": "0x10000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "bdev_raid": { 00:05:02.459 "mask": "0x20000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 }, 00:05:02.459 "scheduler": { 00:05:02.459 "mask": "0x40000", 00:05:02.459 "tpoint_mask": "0x0" 00:05:02.459 } 00:05:02.459 }' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:02.459 00:05:02.459 real 0m0.250s 00:05:02.459 user 0m0.213s 00:05:02.459 sys 0m0.030s 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.459 09:22:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:02.459 ************************************ 00:05:02.459 END TEST rpc_trace_cmd_test 00:05:02.459 ************************************ 00:05:02.719 09:22:37 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:02.719 09:22:37 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:02.719 09:22:37 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:02.719 09:22:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.719 09:22:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.719 09:22:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.719 ************************************ 00:05:02.720 START TEST rpc_daemon_integrity 00:05:02.720 ************************************ 00:05:02.720 09:22:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:02.720 09:22:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.720 09:22:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.720 09:22:37 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.720 09:22:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.720 { 00:05:02.720 "name": "Malloc2", 00:05:02.720 "aliases": [ 00:05:02.720 "dcb0c412-260c-4bbe-a548-96348288e83f" 00:05:02.720 ], 00:05:02.720 "product_name": "Malloc disk", 00:05:02.720 "block_size": 512, 00:05:02.720 "num_blocks": 16384, 00:05:02.720 "uuid": "dcb0c412-260c-4bbe-a548-96348288e83f", 00:05:02.720 "assigned_rate_limits": { 00:05:02.720 "rw_ios_per_sec": 0, 00:05:02.720 "rw_mbytes_per_sec": 0, 00:05:02.720 "r_mbytes_per_sec": 0, 00:05:02.720 "w_mbytes_per_sec": 0 00:05:02.720 }, 00:05:02.720 "claimed": false, 00:05:02.720 "zoned": false, 00:05:02.720 "supported_io_types": { 00:05:02.720 "read": true, 00:05:02.720 "write": true, 00:05:02.720 "unmap": true, 00:05:02.720 "flush": true, 00:05:02.720 "reset": true, 00:05:02.720 "nvme_admin": false, 00:05:02.720 "nvme_io": false, 00:05:02.720 "nvme_io_md": false, 00:05:02.720 "write_zeroes": true, 00:05:02.720 "zcopy": true, 00:05:02.720 "get_zone_info": false, 00:05:02.720 "zone_management": false, 00:05:02.720 "zone_append": false, 00:05:02.720 "compare": false, 00:05:02.720 "compare_and_write": false, 00:05:02.720 "abort": true, 00:05:02.720 "seek_hole": false, 00:05:02.720 "seek_data": false, 00:05:02.720 "copy": true, 00:05:02.720 "nvme_iov_md": false 00:05:02.720 }, 00:05:02.720 "memory_domains": [ 00:05:02.720 { 00:05:02.720 "dma_device_id": "system", 00:05:02.720 "dma_device_type": 1 00:05:02.720 }, 00:05:02.720 { 00:05:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.720 "dma_device_type": 2 00:05:02.720 } 00:05:02.720 ], 00:05:02.720 "driver_specific": {} 00:05:02.720 } 00:05:02.720 ]' 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.720 [2024-12-09 09:22:38.132535] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:02.720 
[2024-12-09 09:22:38.132577] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.720 [2024-12-09 09:22:38.132599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb1a9a0 00:05:02.720 [2024-12-09 09:22:38.132606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.720 [2024-12-09 09:22:38.134111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.720 [2024-12-09 09:22:38.134145] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.720 Passthru0 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.720 { 00:05:02.720 "name": "Malloc2", 00:05:02.720 "aliases": [ 00:05:02.720 "dcb0c412-260c-4bbe-a548-96348288e83f" 00:05:02.720 ], 00:05:02.720 "product_name": "Malloc disk", 00:05:02.720 "block_size": 512, 00:05:02.720 "num_blocks": 16384, 00:05:02.720 "uuid": "dcb0c412-260c-4bbe-a548-96348288e83f", 00:05:02.720 "assigned_rate_limits": { 00:05:02.720 "rw_ios_per_sec": 0, 00:05:02.720 "rw_mbytes_per_sec": 0, 00:05:02.720 "r_mbytes_per_sec": 0, 00:05:02.720 "w_mbytes_per_sec": 0 00:05:02.720 }, 00:05:02.720 "claimed": true, 00:05:02.720 "claim_type": "exclusive_write", 00:05:02.720 "zoned": false, 00:05:02.720 "supported_io_types": { 00:05:02.720 "read": true, 00:05:02.720 "write": true, 00:05:02.720 "unmap": true, 00:05:02.720 "flush": true, 00:05:02.720 "reset": true, 00:05:02.720 "nvme_admin": false, 00:05:02.720 "nvme_io": false, 00:05:02.720 "nvme_io_md": false, 00:05:02.720 "write_zeroes": true, 00:05:02.720 "zcopy": true, 00:05:02.720 "get_zone_info": false, 00:05:02.720 "zone_management": false, 00:05:02.720 "zone_append": false, 00:05:02.720 "compare": false, 00:05:02.720 "compare_and_write": false, 00:05:02.720 "abort": true, 00:05:02.720 "seek_hole": false, 00:05:02.720 "seek_data": false, 00:05:02.720 "copy": true, 00:05:02.720 "nvme_iov_md": false 00:05:02.720 }, 00:05:02.720 "memory_domains": [ 00:05:02.720 { 00:05:02.720 "dma_device_id": "system", 00:05:02.720 "dma_device_type": 1 00:05:02.720 }, 00:05:02.720 { 00:05:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.720 "dma_device_type": 2 00:05:02.720 } 00:05:02.720 ], 00:05:02.720 "driver_specific": {} 00:05:02.720 }, 00:05:02.720 { 00:05:02.720 "name": "Passthru0", 00:05:02.720 "aliases": [ 00:05:02.720 "b0cc3019-6762-542a-af36-75399cd41d37" 00:05:02.720 ], 00:05:02.720 "product_name": "passthru", 00:05:02.720 "block_size": 512, 00:05:02.720 "num_blocks": 16384, 00:05:02.720 "uuid": "b0cc3019-6762-542a-af36-75399cd41d37", 00:05:02.720 "assigned_rate_limits": { 00:05:02.720 "rw_ios_per_sec": 0, 00:05:02.720 "rw_mbytes_per_sec": 0, 00:05:02.720 "r_mbytes_per_sec": 0, 00:05:02.720 "w_mbytes_per_sec": 0 00:05:02.720 }, 00:05:02.720 "claimed": false, 00:05:02.720 "zoned": false, 00:05:02.720 "supported_io_types": { 00:05:02.720 "read": true, 00:05:02.720 "write": true, 00:05:02.720 "unmap": true, 00:05:02.720 "flush": true, 00:05:02.720 "reset": true, 
00:05:02.720 "nvme_admin": false, 00:05:02.720 "nvme_io": false, 00:05:02.720 "nvme_io_md": false, 00:05:02.720 "write_zeroes": true, 00:05:02.720 "zcopy": true, 00:05:02.720 "get_zone_info": false, 00:05:02.720 "zone_management": false, 00:05:02.720 "zone_append": false, 00:05:02.720 "compare": false, 00:05:02.720 "compare_and_write": false, 00:05:02.720 "abort": true, 00:05:02.720 "seek_hole": false, 00:05:02.720 "seek_data": false, 00:05:02.720 "copy": true, 00:05:02.720 "nvme_iov_md": false 00:05:02.720 }, 00:05:02.720 "memory_domains": [ 00:05:02.720 { 00:05:02.720 "dma_device_id": "system", 00:05:02.720 "dma_device_type": 1 00:05:02.720 }, 00:05:02.720 { 00:05:02.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.720 "dma_device_type": 2 00:05:02.720 } 00:05:02.720 ], 00:05:02.720 "driver_specific": { 00:05:02.720 "passthru": { 00:05:02.720 "name": "Passthru0", 00:05:02.720 "base_bdev_name": "Malloc2" 00:05:02.720 } 00:05:02.720 } 00:05:02.720 } 00:05:02.720 ]' 00:05:02.720 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.982 00:05:02.982 real 0m0.306s 00:05:02.982 user 0m0.200s 00:05:02.982 sys 0m0.036s 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.982 09:22:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.982 ************************************ 00:05:02.982 END TEST rpc_daemon_integrity 00:05:02.982 ************************************ 00:05:02.982 09:22:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:02.982 09:22:38 rpc -- rpc/rpc.sh@84 -- # killprocess 2536327 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 2536327 ']' 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@958 -- # kill -0 2536327 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2536327 
00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2536327' 00:05:02.982 killing process with pid 2536327 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@973 -- # kill 2536327 00:05:02.982 09:22:38 rpc -- common/autotest_common.sh@978 -- # wait 2536327 00:05:03.244 00:05:03.244 real 0m2.716s 00:05:03.244 user 0m3.500s 00:05:03.244 sys 0m0.796s 00:05:03.244 09:22:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.244 09:22:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.244 ************************************ 00:05:03.244 END TEST rpc 00:05:03.244 ************************************ 00:05:03.244 09:22:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.244 09:22:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.244 09:22:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.244 09:22:38 -- common/autotest_common.sh@10 -- # set +x 00:05:03.504 ************************************ 00:05:03.504 START TEST skip_rpc 00:05:03.504 ************************************ 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:03.504 * Looking for test storage... 00:05:03.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.504 09:22:38 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.504 09:22:38 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:03.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.504 --rc genhtml_branch_coverage=1 00:05:03.504 --rc genhtml_function_coverage=1 00:05:03.504 --rc genhtml_legend=1 00:05:03.504 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 09:22:38 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.505 09:22:38 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:03.505 09:22:38 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.505 09:22:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.765 ************************************ 00:05:03.765 START TEST skip_rpc 00:05:03.765 ************************************ 00:05:03.765 09:22:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:03.765 
09:22:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2537175 00:05:03.765 09:22:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.765 09:22:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:03.765 09:22:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:03.765 [2024-12-09 09:22:39.017739] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:03.765 [2024-12-09 09:22:39.017797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537175 ] 00:05:03.765 [2024-12-09 09:22:39.107849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.765 [2024-12-09 09:22:39.136045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2537175 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2537175 ']' 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2537175 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.050 09:22:43 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537175 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537175' 00:05:09.050 killing process with pid 2537175 00:05:09.050 09:22:44 
skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2537175 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2537175 00:05:09.050 00:05:09.050 real 0m5.249s 00:05:09.050 user 0m5.012s 00:05:09.050 sys 0m0.277s 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.050 09:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.050 ************************************ 00:05:09.050 END TEST skip_rpc 00:05:09.050 ************************************ 00:05:09.050 09:22:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:09.050 09:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.050 09:22:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.050 09:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.050 ************************************ 00:05:09.050 START TEST skip_rpc_with_json 00:05:09.050 ************************************ 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2538215 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2538215 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2538215 ']' 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.050 09:22:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.050 [2024-12-09 09:22:44.342808] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:05:09.050 [2024-12-09 09:22:44.342858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2538215 ] 00:05:09.050 [2024-12-09 09:22:44.425912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.050 [2024-12-09 09:22:44.442646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.990 [2024-12-09 09:22:45.123922] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.990 request: 00:05:09.990 { 00:05:09.990 "trtype": "tcp", 00:05:09.990 "method": "nvmf_get_transports", 00:05:09.990 "req_id": 1 00:05:09.990 } 00:05:09.990 Got JSON-RPC error response 00:05:09.990 response: 00:05:09.990 { 00:05:09.990 "code": -19, 00:05:09.990 "message": "No such device" 00:05:09.990 } 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.990 [2024-12-09 09:22:45.132010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.990 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:09.990 { 00:05:09.990 "subsystems": [ 00:05:09.990 { 00:05:09.990 "subsystem": "fsdev", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "fsdev_set_opts", 00:05:09.990 "params": { 00:05:09.990 "fsdev_io_pool_size": 65535, 00:05:09.990 "fsdev_io_cache_size": 256 00:05:09.990 } 00:05:09.990 } 00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "vfio_user_target", 00:05:09.990 "config": null 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "keyring", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "iobuf", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "iobuf_set_options", 00:05:09.990 "params": { 00:05:09.990 "small_pool_count": 8192, 00:05:09.990 "large_pool_count": 1024, 00:05:09.990 "small_bufsize": 8192, 00:05:09.990 "large_bufsize": 135168, 00:05:09.990 "enable_numa": false 00:05:09.990 } 00:05:09.990 } 
00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "sock", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "sock_set_default_impl", 00:05:09.990 "params": { 00:05:09.990 "impl_name": "posix" 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "sock_impl_set_options", 00:05:09.990 "params": { 00:05:09.990 "impl_name": "ssl", 00:05:09.990 "recv_buf_size": 4096, 00:05:09.990 "send_buf_size": 4096, 00:05:09.990 "enable_recv_pipe": true, 00:05:09.990 "enable_quickack": false, 00:05:09.990 "enable_placement_id": 0, 00:05:09.990 "enable_zerocopy_send_server": true, 00:05:09.990 "enable_zerocopy_send_client": false, 00:05:09.990 "zerocopy_threshold": 0, 00:05:09.990 "tls_version": 0, 00:05:09.990 "enable_ktls": false 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "sock_impl_set_options", 00:05:09.990 "params": { 00:05:09.990 "impl_name": "posix", 00:05:09.990 "recv_buf_size": 2097152, 00:05:09.990 "send_buf_size": 2097152, 00:05:09.990 "enable_recv_pipe": true, 00:05:09.990 "enable_quickack": false, 00:05:09.990 "enable_placement_id": 0, 00:05:09.990 "enable_zerocopy_send_server": true, 00:05:09.990 "enable_zerocopy_send_client": false, 00:05:09.990 "zerocopy_threshold": 0, 00:05:09.990 "tls_version": 0, 00:05:09.990 "enable_ktls": false 00:05:09.990 } 00:05:09.990 } 00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "vmd", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "accel", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "accel_set_options", 00:05:09.990 "params": { 00:05:09.990 "small_cache_size": 128, 00:05:09.990 "large_cache_size": 16, 00:05:09.990 "task_count": 2048, 00:05:09.990 "sequence_count": 2048, 00:05:09.990 "buf_count": 2048 00:05:09.990 } 00:05:09.990 } 00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "bdev", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "bdev_set_options", 00:05:09.990 "params": { 00:05:09.990 "bdev_io_pool_size": 65535, 00:05:09.990 "bdev_io_cache_size": 256, 00:05:09.990 "bdev_auto_examine": true, 00:05:09.990 "iobuf_small_cache_size": 128, 00:05:09.990 "iobuf_large_cache_size": 16 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "bdev_raid_set_options", 00:05:09.990 "params": { 00:05:09.990 "process_window_size_kb": 1024, 00:05:09.990 "process_max_bandwidth_mb_sec": 0 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "bdev_iscsi_set_options", 00:05:09.990 "params": { 00:05:09.990 "timeout_sec": 30 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "bdev_nvme_set_options", 00:05:09.990 "params": { 00:05:09.990 "action_on_timeout": "none", 00:05:09.990 "timeout_us": 0, 00:05:09.990 "timeout_admin_us": 0, 00:05:09.990 "keep_alive_timeout_ms": 10000, 00:05:09.990 "arbitration_burst": 0, 00:05:09.990 "low_priority_weight": 0, 00:05:09.990 "medium_priority_weight": 0, 00:05:09.990 "high_priority_weight": 0, 00:05:09.990 "nvme_adminq_poll_period_us": 10000, 00:05:09.990 "nvme_ioq_poll_period_us": 0, 00:05:09.990 "io_queue_requests": 0, 00:05:09.990 "delay_cmd_submit": true, 00:05:09.990 "transport_retry_count": 4, 00:05:09.990 "bdev_retry_count": 3, 00:05:09.990 "transport_ack_timeout": 0, 00:05:09.990 "ctrlr_loss_timeout_sec": 0, 00:05:09.990 "reconnect_delay_sec": 0, 00:05:09.990 "fast_io_fail_timeout_sec": 0, 00:05:09.990 "disable_auto_failback": false, 00:05:09.990 "generate_uuids": false, 00:05:09.990 "transport_tos": 
0, 00:05:09.990 "nvme_error_stat": false, 00:05:09.990 "rdma_srq_size": 0, 00:05:09.990 "io_path_stat": false, 00:05:09.990 "allow_accel_sequence": false, 00:05:09.990 "rdma_max_cq_size": 0, 00:05:09.990 "rdma_cm_event_timeout_ms": 0, 00:05:09.990 "dhchap_digests": [ 00:05:09.990 "sha256", 00:05:09.990 "sha384", 00:05:09.990 "sha512" 00:05:09.990 ], 00:05:09.990 "dhchap_dhgroups": [ 00:05:09.990 "null", 00:05:09.990 "ffdhe2048", 00:05:09.990 "ffdhe3072", 00:05:09.990 "ffdhe4096", 00:05:09.990 "ffdhe6144", 00:05:09.990 "ffdhe8192" 00:05:09.990 ] 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "bdev_nvme_set_hotplug", 00:05:09.990 "params": { 00:05:09.990 "period_us": 100000, 00:05:09.990 "enable": false 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "bdev_wait_for_examine" 00:05:09.990 } 00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "scsi", 00:05:09.990 "config": null 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "scheduler", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "framework_set_scheduler", 00:05:09.990 "params": { 00:05:09.990 "name": "static" 00:05:09.990 } 00:05:09.990 } 00:05:09.990 ] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "vhost_scsi", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "vhost_blk", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "ublk", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "nbd", 00:05:09.990 "config": [] 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "subsystem": "nvmf", 00:05:09.990 "config": [ 00:05:09.990 { 00:05:09.990 "method": "nvmf_set_config", 00:05:09.990 "params": { 00:05:09.990 "discovery_filter": "match_any", 00:05:09.990 "admin_cmd_passthru": { 00:05:09.990 "identify_ctrlr": false 00:05:09.990 }, 00:05:09.990 "dhchap_digests": [ 00:05:09.990 "sha256", 00:05:09.990 "sha384", 00:05:09.990 "sha512" 00:05:09.990 ], 00:05:09.990 "dhchap_dhgroups": [ 00:05:09.990 "null", 00:05:09.990 "ffdhe2048", 00:05:09.990 "ffdhe3072", 00:05:09.990 "ffdhe4096", 00:05:09.990 "ffdhe6144", 00:05:09.990 "ffdhe8192" 00:05:09.990 ] 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "nvmf_set_max_subsystems", 00:05:09.990 "params": { 00:05:09.990 "max_subsystems": 1024 00:05:09.990 } 00:05:09.990 }, 00:05:09.990 { 00:05:09.990 "method": "nvmf_set_crdt", 00:05:09.990 "params": { 00:05:09.990 "crdt1": 0, 00:05:09.990 "crdt2": 0, 00:05:09.990 "crdt3": 0 00:05:09.990 } 00:05:09.991 }, 00:05:09.991 { 00:05:09.991 "method": "nvmf_create_transport", 00:05:09.991 "params": { 00:05:09.991 "trtype": "TCP", 00:05:09.991 "max_queue_depth": 128, 00:05:09.991 "max_io_qpairs_per_ctrlr": 127, 00:05:09.991 "in_capsule_data_size": 4096, 00:05:09.991 "max_io_size": 131072, 00:05:09.991 "io_unit_size": 131072, 00:05:09.991 "max_aq_depth": 128, 00:05:09.991 "num_shared_buffers": 511, 00:05:09.991 "buf_cache_size": 4294967295, 00:05:09.991 "dif_insert_or_strip": false, 00:05:09.991 "zcopy": false, 00:05:09.991 "c2h_success": true, 00:05:09.991 "sock_priority": 0, 00:05:09.991 "abort_timeout_sec": 1, 00:05:09.991 "ack_timeout": 0, 00:05:09.991 "data_wr_pool_size": 0 00:05:09.991 } 00:05:09.991 } 00:05:09.991 ] 00:05:09.991 }, 00:05:09.991 { 00:05:09.991 "subsystem": "iscsi", 00:05:09.991 "config": [ 00:05:09.991 { 00:05:09.991 "method": "iscsi_set_options", 00:05:09.991 "params": { 00:05:09.991 "node_base": "iqn.2016-06.io.spdk", 00:05:09.991 "max_sessions": 
128, 00:05:09.991 "max_connections_per_session": 2, 00:05:09.991 "max_queue_depth": 64, 00:05:09.991 "default_time2wait": 2, 00:05:09.991 "default_time2retain": 20, 00:05:09.991 "first_burst_length": 8192, 00:05:09.991 "immediate_data": true, 00:05:09.991 "allow_duplicated_isid": false, 00:05:09.991 "error_recovery_level": 0, 00:05:09.991 "nop_timeout": 60, 00:05:09.991 "nop_in_interval": 30, 00:05:09.991 "disable_chap": false, 00:05:09.991 "require_chap": false, 00:05:09.991 "mutual_chap": false, 00:05:09.991 "chap_group": 0, 00:05:09.991 "max_large_datain_per_connection": 64, 00:05:09.991 "max_r2t_per_connection": 4, 00:05:09.991 "pdu_pool_size": 36864, 00:05:09.991 "immediate_data_pool_size": 16384, 00:05:09.991 "data_out_pool_size": 2048 00:05:09.991 } 00:05:09.991 } 00:05:09.991 ] 00:05:09.991 } 00:05:09.991 ] 00:05:09.991 } 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2538215 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2538215 ']' 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2538215 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538215 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2538215' 00:05:09.991 killing process with pid 2538215 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2538215 00:05:09.991 09:22:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2538215 00:05:10.250 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2538515 00:05:10.250 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:10.250 09:22:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2538515 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2538515 ']' 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2538515 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2538515 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2538515' 00:05:15.531 killing process with pid 2538515 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2538515 00:05:15.531 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2538515 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:15.532 00:05:15.532 real 0m6.497s 00:05:15.532 user 0m6.381s 00:05:15.532 sys 0m0.536s 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:15.532 ************************************ 00:05:15.532 END TEST skip_rpc_with_json 00:05:15.532 ************************************ 00:05:15.532 09:22:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.532 ************************************ 00:05:15.532 START TEST skip_rpc_with_delay 00:05:15.532 ************************************ 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:15.532 
[2024-12-09 09:22:50.927715] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.532 00:05:15.532 real 0m0.083s 00:05:15.532 user 0m0.053s 00:05:15.532 sys 0m0.029s 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.532 09:22:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:15.532 ************************************ 00:05:15.532 END TEST skip_rpc_with_delay 00:05:15.532 ************************************ 00:05:15.532 09:22:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:15.532 09:22:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:15.532 09:22:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.532 09:22:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.792 ************************************ 00:05:15.792 START TEST exit_on_failed_rpc_init 00:05:15.792 ************************************ 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2539623 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2539623 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2539623 ']' 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.792 09:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.792 [2024-12-09 09:22:51.076722] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
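The '--wait-for-rpc' ERROR recorded above is the expected outcome of test_skip_rpc_with_delay: spdk_tgt must refuse that flag when '--no-rpc-server' disables the RPC server. A minimal standalone sketch of the same assertion, using the binary path from the trace (the grep pattern is an assumption based on the message printed above):

# Sketch: assert that --no-rpc-server and --wait-for-rpc are rejected together.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
if out=$("$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc 2>&1); then
    echo "FAIL: spdk_tgt started despite incompatible flags" >&2
    exit 1
fi
grep -q "Cannot use '--wait-for-rpc'" <<<"$out" && echo "OK: flags rejected"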
00:05:15.792 [2024-12-09 09:22:51.076776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539623 ] 00:05:15.792 [2024-12-09 09:22:51.162643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.792 [2024-12-09 09:22:51.180455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:16.734 09:22:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.734 [2024-12-09 09:22:51.898705] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:16.734 [2024-12-09 09:22:51.898756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539718 ] 00:05:16.734 [2024-12-09 09:22:51.983089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.734 [2024-12-09 09:22:52.001034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.734 [2024-12-09 09:22:52.001084] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
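exit_on_failed_rpc_init deliberately provokes the "socket in use" failure above: a second spdk_tgt is pointed at the default /var/tmp/spdk.sock while the first instance (pid 2539623) still owns it, and the follow-on messages just below show rpc init aborting and the app stopping non-zero. A hand-run sketch of the same collision, assuming a fixed settle time where the test proper uses waitforlisten:

# Sketch: two targets contending for the default RPC socket.
SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &             # first instance claims /var/tmp/spdk.sock
first=$!
sleep 2                          # assumption: long enough to reach rpc_listen
if "$SPDK_TGT" -m 0x2; then      # second instance must fail to initialize
    echo "FAIL: second instance initialized" >&2
else
    echo "OK: init failed with exit $?"
fi
kill "$first"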
00:05:16.734 [2024-12-09 09:22:52.001093] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.734 [2024-12-09 09:22:52.001100] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2539623 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2539623 ']' 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2539623 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539623 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539623' 00:05:16.734 killing process with pid 2539623 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2539623 00:05:16.734 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2539623 00:05:16.995 00:05:16.995 real 0m1.256s 00:05:16.995 user 0m1.436s 00:05:16.995 sys 0m0.371s 00:05:16.995 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.995 09:22:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:16.995 ************************************ 00:05:16.995 END TEST exit_on_failed_rpc_init 00:05:16.995 ************************************ 00:05:16.995 09:22:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:16.995 00:05:16.995 real 0m13.599s 00:05:16.995 user 0m13.117s 00:05:16.995 sys 0m1.520s 00:05:16.995 09:22:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.995 09:22:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.995 ************************************ 00:05:16.995 END TEST skip_rpc 00:05:16.995 ************************************ 00:05:16.995 09:22:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:16.995 09:22:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.995 09:22:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.995 09:22:52 -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.995 ************************************ 00:05:16.995 START TEST rpc_client 00:05:16.995 ************************************ 00:05:16.995 09:22:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:17.256 * Looking for test storage... 00:05:17.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.256 09:22:52 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.256 09:22:52 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.256 --rc genhtml_branch_coverage=1 00:05:17.256 --rc genhtml_function_coverage=1 00:05:17.256 --rc genhtml_legend=1 00:05:17.256 --rc geninfo_all_blocks=1 00:05:17.256 --rc geninfo_unexecuted_blocks=1 00:05:17.257 00:05:17.257 ' 00:05:17.257 09:22:52 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.257 --rc genhtml_branch_coverage=1 00:05:17.257 --rc genhtml_function_coverage=1 00:05:17.257 --rc genhtml_legend=1 00:05:17.257 --rc geninfo_all_blocks=1 00:05:17.257 --rc geninfo_unexecuted_blocks=1 00:05:17.257 00:05:17.257 ' 00:05:17.257 09:22:52 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.257 --rc genhtml_branch_coverage=1 00:05:17.257 --rc genhtml_function_coverage=1 00:05:17.257 --rc genhtml_legend=1 00:05:17.257 --rc geninfo_all_blocks=1 00:05:17.257 --rc geninfo_unexecuted_blocks=1 00:05:17.257 00:05:17.257 ' 00:05:17.257 09:22:52 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.257 --rc genhtml_branch_coverage=1 00:05:17.257 --rc genhtml_function_coverage=1 00:05:17.257 --rc genhtml_legend=1 00:05:17.257 --rc geninfo_all_blocks=1 00:05:17.257 --rc geninfo_unexecuted_blocks=1 00:05:17.257 00:05:17.257 ' 00:05:17.257 09:22:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:17.257 OK 00:05:17.257 09:22:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:17.257 00:05:17.257 real 0m0.219s 00:05:17.257 user 0m0.123s 00:05:17.257 sys 0m0.108s 00:05:17.257 09:22:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.257 09:22:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:17.257 ************************************ 00:05:17.257 END TEST rpc_client 00:05:17.257 ************************************ 00:05:17.257 09:22:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:05:17.257 09:22:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.257 09:22:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.257 09:22:52 -- common/autotest_common.sh@10 -- # set +x 00:05:17.257 ************************************ 00:05:17.257 START TEST json_config 00:05:17.257 ************************************ 00:05:17.257 09:22:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.519 09:22:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.519 09:22:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.519 09:22:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.519 09:22:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.519 09:22:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.519 09:22:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:17.519 09:22:52 json_config -- scripts/common.sh@345 -- # : 1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.519 09:22:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.519 09:22:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@353 -- # local d=1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.519 09:22:52 json_config -- scripts/common.sh@355 -- # echo 1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.519 09:22:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@353 -- # local d=2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.519 09:22:52 json_config -- scripts/common.sh@355 -- # echo 2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.519 09:22:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.519 09:22:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.519 09:22:52 json_config -- scripts/common.sh@368 -- # return 0 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.519 --rc genhtml_branch_coverage=1 00:05:17.519 --rc genhtml_function_coverage=1 00:05:17.519 --rc genhtml_legend=1 00:05:17.519 --rc geninfo_all_blocks=1 00:05:17.519 --rc geninfo_unexecuted_blocks=1 00:05:17.519 00:05:17.519 ' 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.519 --rc genhtml_branch_coverage=1 00:05:17.519 --rc genhtml_function_coverage=1 00:05:17.519 --rc genhtml_legend=1 00:05:17.519 --rc geninfo_all_blocks=1 00:05:17.519 --rc geninfo_unexecuted_blocks=1 00:05:17.519 00:05:17.519 ' 00:05:17.519 09:22:52 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.519 --rc genhtml_branch_coverage=1 00:05:17.519 --rc genhtml_function_coverage=1 00:05:17.519 --rc genhtml_legend=1 00:05:17.519 --rc geninfo_all_blocks=1 00:05:17.519 --rc geninfo_unexecuted_blocks=1 00:05:17.519 00:05:17.520 ' 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.520 --rc genhtml_branch_coverage=1 00:05:17.520 --rc genhtml_function_coverage=1 00:05:17.520 --rc genhtml_legend=1 00:05:17.520 --rc geninfo_all_blocks=1 00:05:17.520 --rc geninfo_unexecuted_blocks=1 00:05:17.520 00:05:17.520 ' 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:17.520 09:22:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:17.520 09:22:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:17.520 09:22:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:17.520 09:22:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.520 09:22:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.520 09:22:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.520 09:22:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.520 09:22:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.520 09:22:52 json_config -- paths/export.sh@5 -- # export PATH 00:05:17.520 09:22:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@51 -- # : 0 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:05:17.520 09:22:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:17.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:17.520 09:22:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:17.520 INFO: JSON configuration test init 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.520 09:22:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:17.520 09:22:52 json_config -- 
json_config/common.sh@9 -- # local app=target 00:05:17.520 09:22:52 json_config -- json_config/common.sh@10 -- # shift 00:05:17.520 09:22:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:17.520 09:22:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:17.520 09:22:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:17.520 09:22:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.520 09:22:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:17.520 09:22:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:17.520 09:22:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2540093 00:05:17.520 09:22:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:17.520 Waiting for target to run... 00:05:17.520 09:22:52 json_config -- json_config/common.sh@25 -- # waitforlisten 2540093 /var/tmp/spdk_tgt.sock 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 2540093 ']' 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:17.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.520 09:22:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:17.520 [2024-12-09 09:22:52.945625] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
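This target comes up suspended: '--wait-for-rpc' holds subsystem initialization until the test has pushed its entire configuration through the RPC socket. A sketch of the pattern being driven here, with the gen_nvme.sh and load_config invocations as traced below and the wait loop elided (the test proper uses waitforlisten):

# Sketch: boot the target suspended, then feed it a generated config over RPC.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
# ... waitforlisten /var/tmp/spdk_tgt.sock ...
"$SPDK/scripts/gen_nvme.sh" --json-with-subsystems \
    | "$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock load_config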
00:05:17.520 [2024-12-09 09:22:52.945692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2540093 ] 00:05:17.781 [2024-12-09 09:22:53.177641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.781 [2024-12-09 09:22:53.186846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:18.353 09:22:53 json_config -- json_config/common.sh@26 -- # echo '' 00:05:18.353 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.353 09:22:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:18.353 09:22:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:18.353 09:22:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:18.923 09:22:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:18.923 09:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:18.923 09:22:54 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:18.923 09:22:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:19.183 09:22:54 json_config -- 
json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@54 -- # sort 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:19.183 09:22:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.183 09:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:19.183 09:22:54 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:19.183 09:22:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.183 09:22:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.184 09:22:54 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.184 09:22:54 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:19.184 09:22:54 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:19.184 09:22:54 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.184 09:22:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.444 MallocForNvmf0 00:05:19.444 09:22:54 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.444 09:22:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:19.704 MallocForNvmf1 00:05:19.704 09:22:54 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.704 09:22:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:19.704 [2024-12-09 09:22:55.070840] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.704 09:22:55 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.704 09:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:19.965 09:22:55 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:19.965 09:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.225 09:22:55 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.225 09:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.225 09:22:55 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.225 09:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.486 [2024-12-09 09:22:55.797050] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.486 09:22:55 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:20.486 09:22:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.486 09:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.486 09:22:55 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:20.486 09:22:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.486 09:22:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.486 09:22:55 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:20.486 09:22:55 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.486 09:22:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:20.746 MallocBdevForConfigChangeCheck 00:05:20.746 09:22:56 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:20.746 09:22:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:20.746 09:22:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:20.746 09:22:56 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:20.746 09:22:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.007 09:22:56 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:05:21.007 INFO: shutting down applications... 
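Stripped of the xtrace plumbing, the storage that the relaunch below must reproduce was provisioned with the following rpc.py calls, all verbatim from the trace above:

# The create_nvmf_subsystem_config sequence above as a plain RPC session
# (commands verbatim from the trace; default json_config target socket).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck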
00:05:21.007 09:22:56 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:21.007 09:22:56 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:21.007 09:22:56 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:21.007 09:22:56 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:21.576 Calling clear_iscsi_subsystem 00:05:21.576 Calling clear_nvmf_subsystem 00:05:21.576 Calling clear_nbd_subsystem 00:05:21.576 Calling clear_ublk_subsystem 00:05:21.576 Calling clear_vhost_blk_subsystem 00:05:21.576 Calling clear_vhost_scsi_subsystem 00:05:21.576 Calling clear_bdev_subsystem 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.576 09:22:56 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.836 09:22:57 json_config -- json_config/json_config.sh@352 -- # break 00:05:21.836 09:22:57 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:21.836 09:22:57 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:21.836 09:22:57 json_config -- json_config/common.sh@31 -- # local app=target 00:05:21.836 09:22:57 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:21.836 09:22:57 json_config -- json_config/common.sh@35 -- # [[ -n 2540093 ]] 00:05:21.836 09:22:57 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2540093 00:05:21.836 09:22:57 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:21.836 09:22:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.836 09:22:57 json_config -- json_config/common.sh@41 -- # kill -0 2540093 00:05:21.836 09:22:57 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:22.412 09:22:57 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:22.412 09:22:57 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.412 09:22:57 json_config -- json_config/common.sh@41 -- # kill -0 2540093 00:05:22.412 09:22:57 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:22.412 09:22:57 json_config -- json_config/common.sh@43 -- # break 00:05:22.412 09:22:57 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:22.412 09:22:57 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:22.412 SPDK target shutdown done 00:05:22.412 09:22:57 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:22.412 INFO: relaunching applications... 
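With the subsystems cleared and pid 2540093 stopped, the relaunch that follows proves the save/restore round trip: the configuration saved over RPC becomes the '--json' boot file for a fresh target. The round trip, condensed (both commands taken from the trace):

# Sketch: persist the live config, then boot a new target from that file.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config > "$SPDK/spdk_tgt_config.json"
"$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json "$SPDK/spdk_tgt_config.json"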
00:05:22.412 09:22:57 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.412 09:22:57 json_config -- json_config/common.sh@9 -- # local app=target 00:05:22.412 09:22:57 json_config -- json_config/common.sh@10 -- # shift 00:05:22.412 09:22:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:22.412 09:22:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:22.412 09:22:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:22.412 09:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.412 09:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:22.412 09:22:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2541231 00:05:22.412 09:22:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:22.412 Waiting for target to run... 00:05:22.412 09:22:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2541231 /var/tmp/spdk_tgt.sock 00:05:22.412 09:22:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 2541231 ']' 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:22.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.412 09:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:22.412 [2024-12-09 09:22:57.779565] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:22.412 [2024-12-09 09:22:57.779625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541231 ] 00:05:22.671 [2024-12-09 09:22:58.080722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.671 [2024-12-09 09:22:58.090729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.241 [2024-12-09 09:22:58.565218] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.241 [2024-12-09 09:22:58.597578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.241 09:22:58 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.241 09:22:58 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:23.241 09:22:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.241 00:05:23.241 09:22:58 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:23.241 09:22:58 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.241 INFO: Checking if target configuration is the same... 
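The two json_diff.sh passes that follow implement both halves of the check: normalize each side with config_filter.py -method sort and diff them, first expecting no differences, then deleting the sentinel bdev and expecting ret=1. A condensed sketch of both passes, assuming config_filter.py filters stdin to stdout as the mktemp plumbing in the trace implies:

# Sketch: sort-normalize the on-disk and live configs, diff them, then
# prove a deliberate change (dropping the sentinel bdev) breaks the diff.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
FILTER="$SPDK/test/json_config/config_filter.py"
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

"$FILTER" -method sort < "$SPDK/spdk_tgt_config.json" > /tmp/disk.json
$RPC save_config | "$FILTER" -method sort > /tmp/live.json
diff -u /tmp/disk.json /tmp/live.json && echo 'INFO: JSON config files are the same'

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
$RPC save_config | "$FILTER" -method sort > /tmp/live.json
if diff -u /tmp/disk.json /tmp/live.json > /dev/null; then
    echo 'FAIL: configuration change went undetected' >&2
else
    echo 'INFO: configuration change detected.'
fi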
00:05:23.241 09:22:58 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.241 09:22:58 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:23.241 09:22:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.241 + '[' 2 -ne 2 ']' 00:05:23.241 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.241 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.241 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.241 +++ basename /dev/fd/62 00:05:23.241 ++ mktemp /tmp/62.XXX 00:05:23.242 + tmp_file_1=/tmp/62.pSR 00:05:23.242 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.242 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.242 + tmp_file_2=/tmp/spdk_tgt_config.json.XJ6 00:05:23.242 + ret=0 00:05:23.242 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.813 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:23.813 + diff -u /tmp/62.pSR /tmp/spdk_tgt_config.json.XJ6 00:05:23.813 + echo 'INFO: JSON config files are the same' 00:05:23.813 INFO: JSON config files are the same 00:05:23.813 + rm /tmp/62.pSR /tmp/spdk_tgt_config.json.XJ6 00:05:23.813 + exit 0 00:05:23.813 09:22:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:23.813 09:22:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:23.813 INFO: changing configuration and checking if this can be detected... 00:05:23.813 09:22:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.813 09:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:23.813 09:22:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:23.813 09:22:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.813 09:22:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.813 + '[' 2 -ne 2 ']' 00:05:23.813 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.813 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:23.813 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.813 +++ basename /dev/fd/62 00:05:23.813 ++ mktemp /tmp/62.XXX 00:05:23.813 + tmp_file_1=/tmp/62.u9R 00:05:23.813 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.813 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.813 + tmp_file_2=/tmp/spdk_tgt_config.json.Ywe 00:05:23.813 + ret=0 00:05:23.813 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.382 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.382 + diff -u /tmp/62.u9R /tmp/spdk_tgt_config.json.Ywe 00:05:24.382 + ret=1 00:05:24.382 + echo '=== Start of file: /tmp/62.u9R ===' 00:05:24.382 + cat /tmp/62.u9R 00:05:24.382 + echo '=== End of file: /tmp/62.u9R ===' 00:05:24.382 + echo '' 00:05:24.382 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Ywe ===' 00:05:24.382 + cat /tmp/spdk_tgt_config.json.Ywe 00:05:24.382 + echo '=== End of file: /tmp/spdk_tgt_config.json.Ywe ===' 00:05:24.382 + echo '' 00:05:24.382 + rm /tmp/62.u9R /tmp/spdk_tgt_config.json.Ywe 00:05:24.382 + exit 1 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:24.382 INFO: configuration change detected. 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@324 -- # [[ -n 2541231 ]] 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.382 09:22:59 json_config -- json_config/json_config.sh@330 -- # killprocess 2541231 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@954 -- # '[' -z 2541231 ']' 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@958 -- # kill -0 2541231 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.382 09:22:59 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.382 09:22:59 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541231 00:05:24.383 09:22:59 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.383 09:22:59 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.383 09:22:59 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541231' 00:05:24.383 killing process with pid 2541231 00:05:24.383 09:22:59 json_config -- common/autotest_common.sh@973 -- # kill 2541231 00:05:24.383 09:22:59 json_config -- common/autotest_common.sh@978 -- # wait 2541231 00:05:24.642 09:22:59 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.642 09:22:59 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:24.642 09:22:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.642 09:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 09:22:59 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:24.642 09:22:59 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:24.642 INFO: Success 00:05:24.642 00:05:24.642 real 0m7.305s 00:05:24.642 user 0m9.185s 00:05:24.642 sys 0m1.616s 00:05:24.642 09:22:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.642 09:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 ************************************ 00:05:24.642 END TEST json_config 00:05:24.642 ************************************ 00:05:24.642 09:23:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.642 09:23:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.642 09:23:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.642 09:23:00 -- common/autotest_common.sh@10 -- # set +x 00:05:24.642 ************************************ 00:05:24.642 START TEST json_config_extra_key 00:05:24.642 ************************************ 00:05:24.642 09:23:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.903 09:23:00 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.903 --rc genhtml_branch_coverage=1 00:05:24.903 --rc genhtml_function_coverage=1 00:05:24.903 --rc genhtml_legend=1 00:05:24.903 --rc geninfo_all_blocks=1 00:05:24.903 --rc geninfo_unexecuted_blocks=1 00:05:24.903 00:05:24.903 ' 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.903 --rc genhtml_branch_coverage=1 00:05:24.903 --rc genhtml_function_coverage=1 00:05:24.903 --rc genhtml_legend=1 00:05:24.903 --rc geninfo_all_blocks=1 00:05:24.903 --rc geninfo_unexecuted_blocks=1 00:05:24.903 00:05:24.903 ' 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.903 --rc genhtml_branch_coverage=1 00:05:24.903 --rc genhtml_function_coverage=1 00:05:24.903 --rc genhtml_legend=1 00:05:24.903 --rc geninfo_all_blocks=1 00:05:24.903 --rc geninfo_unexecuted_blocks=1 00:05:24.903 00:05:24.903 ' 00:05:24.903 09:23:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.903 --rc genhtml_branch_coverage=1 00:05:24.903 --rc genhtml_function_coverage=1 00:05:24.903 --rc genhtml_legend=1 00:05:24.903 --rc geninfo_all_blocks=1 00:05:24.903 --rc geninfo_unexecuted_blocks=1 00:05:24.903 00:05:24.903 ' 00:05:24.903 09:23:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.903 09:23:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.903 09:23:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.903 09:23:00 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.903 09:23:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.904 09:23:00 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.904 09:23:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.904 09:23:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.904 09:23:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:24.904 INFO: launching applications... 
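The extra_key suite tracks each app in bash associative arrays and arms an ERR trap before launching, as traced above. A condensed sketch; the on_error_exit handler body is not shown in the log, so a stand-in is used:

  declare -A app_pid=(['target']='')
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']="$rootdir/test/json_config/extra_key.json")
  trap 'echo "ERR in ${FUNCNAME[0]} at line ${LINENO}" >&2; exit 1' ERR  # stand-in for on_error_exit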
00:05:24.904 09:23:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2541788 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.904 Waiting for target to run... 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2541788 /var/tmp/spdk_tgt.sock 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2541788 ']' 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.904 09:23:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.904 09:23:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.904 [2024-12-09 09:23:00.350326] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:24.904 [2024-12-09 09:23:00.350400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2541788 ] 00:05:25.475 [2024-12-09 09:23:00.743146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.475 [2024-12-09 09:23:00.759599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.735 09:23:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.735 09:23:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.735 00:05:25.735 09:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:25.735 INFO: shutting down applications... 
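Shutdown, traced below, is a SIGINT followed by a bounded liveness poll rather than an immediate SIGKILL. A sketch of that loop, with the iteration count and sleep interval taken from the common.sh trace:

  kill -SIGINT "${app_pid[target]}"
  for ((i = 0; i < 30; i++)); do
      kill -0 "${app_pid[target]}" 2>/dev/null || break  # process exited: shutdown done
      sleep 0.5
  done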
00:05:25.735 09:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2541788 ]] 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2541788 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2541788 00:05:25.735 09:23:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2541788 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:26.305 09:23:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:26.305 SPDK target shutdown done 00:05:26.305 09:23:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:26.305 Success 00:05:26.305 00:05:26.305 real 0m1.586s 00:05:26.305 user 0m1.086s 00:05:26.305 sys 0m0.525s 00:05:26.305 09:23:01 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.305 09:23:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.305 ************************************ 00:05:26.305 END TEST json_config_extra_key 00:05:26.305 ************************************ 00:05:26.305 09:23:01 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.305 09:23:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.305 09:23:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.305 09:23:01 -- common/autotest_common.sh@10 -- # set +x 00:05:26.305 ************************************ 00:05:26.305 START TEST alias_rpc 00:05:26.305 ************************************ 00:05:26.305 09:23:01 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:26.567 * Looking for test storage... 
00:05:26.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.567 09:23:01 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 09:23:01 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.567 --rc genhtml_branch_coverage=1 00:05:26.567 --rc genhtml_function_coverage=1 00:05:26.567 --rc genhtml_legend=1 00:05:26.567 --rc geninfo_all_blocks=1 00:05:26.567 --rc geninfo_unexecuted_blocks=1 00:05:26.567 00:05:26.567 ' 00:05:26.567 09:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:26.567 09:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2542146 00:05:26.567 09:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2542146 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2542146 ']' 00:05:26.567 09:23:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.567 09:23:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.567 [2024-12-09 09:23:01.997929] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
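Every suite re-runs the same lcov version gate (lt 1.15 2 via cmp_versions) before exporting LCOV_OPTS, as traced above and repeated later for spdkcli_tcp and dpdk_mem_utility. A condensed sketch of that field-by-field comparison; the helper below abridges the scripts/common.sh logic, and the caller line is hypothetical:

  lt() {  # returns 0 when $1 < $2, comparing dot/dash/colon-separated fields
      local -a ver1 ver2; local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1
  }
  lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'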
00:05:26.567 [2024-12-09 09:23:01.998006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542146 ] 00:05:26.827 [2024-12-09 09:23:02.086311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.827 [2024-12-09 09:23:02.105553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.398 09:23:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.398 09:23:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:27.398 09:23:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:27.659 09:23:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2542146 00:05:27.659 09:23:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2542146 ']' 00:05:27.659 09:23:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2542146 00:05:27.659 09:23:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542146 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542146' 00:05:27.659 killing process with pid 2542146 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 2542146 00:05:27.659 09:23:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 2542146 00:05:27.920 00:05:27.920 real 0m1.506s 00:05:27.920 user 0m1.656s 00:05:27.920 sys 0m0.430s 00:05:27.920 09:23:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.920 09:23:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.920 ************************************ 00:05:27.920 END TEST alias_rpc 00:05:27.920 ************************************ 00:05:27.920 09:23:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:27.920 09:23:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:27.920 09:23:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:27.920 09:23:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.920 09:23:03 -- common/autotest_common.sh@10 -- # set +x 00:05:27.920 ************************************ 00:05:27.920 START TEST spdkcli_tcp 00:05:27.920 ************************************ 00:05:27.920 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:28.181 * Looking for test storage... 
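killprocess, traced above for pid 2542146, checks the process is alive, refuses to signal one whose comm is sudo, and then waits so exit status and timing are collected. A minimal sketch with assumed variable names:

  pid=2542146
  kill -0 "$pid"                          # still alive?
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" != sudo ] && kill "$pid"      # never kill a sudo wrapper directly
  wait "$pid" 2>/dev/null                 # reap; works because the target is a job of this shell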
00:05:28.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.181 09:23:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.181 --rc genhtml_branch_coverage=1 00:05:28.181 --rc genhtml_function_coverage=1 00:05:28.181 --rc genhtml_legend=1 00:05:28.181 --rc geninfo_all_blocks=1 00:05:28.181 --rc geninfo_unexecuted_blocks=1 00:05:28.181 00:05:28.181 ' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.181 --rc genhtml_branch_coverage=1 00:05:28.181 --rc genhtml_function_coverage=1 00:05:28.181 --rc genhtml_legend=1 00:05:28.181 --rc geninfo_all_blocks=1 00:05:28.181 --rc 
geninfo_unexecuted_blocks=1 00:05:28.181 00:05:28.181 ' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:28.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.181 --rc genhtml_branch_coverage=1 00:05:28.181 --rc genhtml_function_coverage=1 00:05:28.181 --rc genhtml_legend=1 00:05:28.181 --rc geninfo_all_blocks=1 00:05:28.181 --rc geninfo_unexecuted_blocks=1 00:05:28.181 00:05:28.181 ' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.181 --rc genhtml_branch_coverage=1 00:05:28.181 --rc genhtml_function_coverage=1 00:05:28.181 --rc genhtml_legend=1 00:05:28.181 --rc geninfo_all_blocks=1 00:05:28.181 --rc geninfo_unexecuted_blocks=1 00:05:28.181 00:05:28.181 ' 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2542499 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2542499 00:05:28.181 09:23:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2542499 ']' 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.181 09:23:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.182 09:23:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.182 09:23:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.182 09:23:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.182 [2024-12-09 09:23:03.579646] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
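For spdkcli_tcp the target runs on two cores (-m 0x3) and its UNIX RPC socket is bridged to TCP with socat so rpc.py can exercise the 127.0.0.1:9998 path, as traced below. A sketch of that bridge; the explicit teardown line is an assumption (the harness relies on its err_cleanup trap):

  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # bridge the TCP port to the RPC socket
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods  # -r retries, -t timeout, per the trace
  kill "$socat_pid"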
00:05:28.182 [2024-12-09 09:23:03.579699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542499 ] 00:05:28.443 [2024-12-09 09:23:03.663567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.443 [2024-12-09 09:23:03.681176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.443 [2024-12-09 09:23:03.681176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.015 09:23:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.015 09:23:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:29.015 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2542822 00:05:29.015 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:29.015 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.277 [ 00:05:29.277 "bdev_malloc_delete", 00:05:29.277 "bdev_malloc_create", 00:05:29.277 "bdev_null_resize", 00:05:29.277 "bdev_null_delete", 00:05:29.277 "bdev_null_create", 00:05:29.277 "bdev_nvme_cuse_unregister", 00:05:29.277 "bdev_nvme_cuse_register", 00:05:29.277 "bdev_opal_new_user", 00:05:29.277 "bdev_opal_set_lock_state", 00:05:29.277 "bdev_opal_delete", 00:05:29.277 "bdev_opal_get_info", 00:05:29.277 "bdev_opal_create", 00:05:29.277 "bdev_nvme_opal_revert", 00:05:29.277 "bdev_nvme_opal_init", 00:05:29.277 "bdev_nvme_send_cmd", 00:05:29.277 "bdev_nvme_set_keys", 00:05:29.277 "bdev_nvme_get_path_iostat", 00:05:29.277 "bdev_nvme_get_mdns_discovery_info", 00:05:29.277 "bdev_nvme_stop_mdns_discovery", 00:05:29.277 "bdev_nvme_start_mdns_discovery", 00:05:29.277 "bdev_nvme_set_multipath_policy", 00:05:29.277 "bdev_nvme_set_preferred_path", 00:05:29.277 "bdev_nvme_get_io_paths", 00:05:29.277 "bdev_nvme_remove_error_injection", 00:05:29.277 "bdev_nvme_add_error_injection", 00:05:29.277 "bdev_nvme_get_discovery_info", 00:05:29.277 "bdev_nvme_stop_discovery", 00:05:29.277 "bdev_nvme_start_discovery", 00:05:29.277 "bdev_nvme_get_controller_health_info", 00:05:29.277 "bdev_nvme_disable_controller", 00:05:29.277 "bdev_nvme_enable_controller", 00:05:29.277 "bdev_nvme_reset_controller", 00:05:29.277 "bdev_nvme_get_transport_statistics", 00:05:29.277 "bdev_nvme_apply_firmware", 00:05:29.277 "bdev_nvme_detach_controller", 00:05:29.277 "bdev_nvme_get_controllers", 00:05:29.277 "bdev_nvme_attach_controller", 00:05:29.277 "bdev_nvme_set_hotplug", 00:05:29.277 "bdev_nvme_set_options", 00:05:29.277 "bdev_passthru_delete", 00:05:29.277 "bdev_passthru_create", 00:05:29.277 "bdev_lvol_set_parent_bdev", 00:05:29.277 "bdev_lvol_set_parent", 00:05:29.277 "bdev_lvol_check_shallow_copy", 00:05:29.277 "bdev_lvol_start_shallow_copy", 00:05:29.277 "bdev_lvol_grow_lvstore", 00:05:29.277 "bdev_lvol_get_lvols", 00:05:29.277 "bdev_lvol_get_lvstores", 00:05:29.277 "bdev_lvol_delete", 00:05:29.277 "bdev_lvol_set_read_only", 00:05:29.277 "bdev_lvol_resize", 00:05:29.277 "bdev_lvol_decouple_parent", 00:05:29.277 "bdev_lvol_inflate", 00:05:29.277 "bdev_lvol_rename", 00:05:29.277 "bdev_lvol_clone_bdev", 00:05:29.277 "bdev_lvol_clone", 00:05:29.277 "bdev_lvol_snapshot", 00:05:29.277 "bdev_lvol_create", 00:05:29.277 "bdev_lvol_delete_lvstore", 00:05:29.277 "bdev_lvol_rename_lvstore", 
00:05:29.277 "bdev_lvol_create_lvstore", 00:05:29.277 "bdev_raid_set_options", 00:05:29.277 "bdev_raid_remove_base_bdev", 00:05:29.277 "bdev_raid_add_base_bdev", 00:05:29.277 "bdev_raid_delete", 00:05:29.277 "bdev_raid_create", 00:05:29.277 "bdev_raid_get_bdevs", 00:05:29.277 "bdev_error_inject_error", 00:05:29.277 "bdev_error_delete", 00:05:29.277 "bdev_error_create", 00:05:29.277 "bdev_split_delete", 00:05:29.277 "bdev_split_create", 00:05:29.277 "bdev_delay_delete", 00:05:29.277 "bdev_delay_create", 00:05:29.277 "bdev_delay_update_latency", 00:05:29.277 "bdev_zone_block_delete", 00:05:29.277 "bdev_zone_block_create", 00:05:29.277 "blobfs_create", 00:05:29.277 "blobfs_detect", 00:05:29.277 "blobfs_set_cache_size", 00:05:29.277 "bdev_aio_delete", 00:05:29.277 "bdev_aio_rescan", 00:05:29.277 "bdev_aio_create", 00:05:29.277 "bdev_ftl_set_property", 00:05:29.277 "bdev_ftl_get_properties", 00:05:29.277 "bdev_ftl_get_stats", 00:05:29.277 "bdev_ftl_unmap", 00:05:29.277 "bdev_ftl_unload", 00:05:29.277 "bdev_ftl_delete", 00:05:29.277 "bdev_ftl_load", 00:05:29.277 "bdev_ftl_create", 00:05:29.277 "bdev_virtio_attach_controller", 00:05:29.277 "bdev_virtio_scsi_get_devices", 00:05:29.277 "bdev_virtio_detach_controller", 00:05:29.277 "bdev_virtio_blk_set_hotplug", 00:05:29.277 "bdev_iscsi_delete", 00:05:29.277 "bdev_iscsi_create", 00:05:29.277 "bdev_iscsi_set_options", 00:05:29.277 "accel_error_inject_error", 00:05:29.277 "ioat_scan_accel_module", 00:05:29.277 "dsa_scan_accel_module", 00:05:29.277 "iaa_scan_accel_module", 00:05:29.277 "vfu_virtio_create_fs_endpoint", 00:05:29.277 "vfu_virtio_create_scsi_endpoint", 00:05:29.277 "vfu_virtio_scsi_remove_target", 00:05:29.277 "vfu_virtio_scsi_add_target", 00:05:29.277 "vfu_virtio_create_blk_endpoint", 00:05:29.277 "vfu_virtio_delete_endpoint", 00:05:29.277 "keyring_file_remove_key", 00:05:29.277 "keyring_file_add_key", 00:05:29.277 "keyring_linux_set_options", 00:05:29.277 "fsdev_aio_delete", 00:05:29.277 "fsdev_aio_create", 00:05:29.277 "iscsi_get_histogram", 00:05:29.277 "iscsi_enable_histogram", 00:05:29.277 "iscsi_set_options", 00:05:29.277 "iscsi_get_auth_groups", 00:05:29.277 "iscsi_auth_group_remove_secret", 00:05:29.277 "iscsi_auth_group_add_secret", 00:05:29.277 "iscsi_delete_auth_group", 00:05:29.277 "iscsi_create_auth_group", 00:05:29.277 "iscsi_set_discovery_auth", 00:05:29.277 "iscsi_get_options", 00:05:29.277 "iscsi_target_node_request_logout", 00:05:29.277 "iscsi_target_node_set_redirect", 00:05:29.277 "iscsi_target_node_set_auth", 00:05:29.277 "iscsi_target_node_add_lun", 00:05:29.277 "iscsi_get_stats", 00:05:29.278 "iscsi_get_connections", 00:05:29.278 "iscsi_portal_group_set_auth", 00:05:29.278 "iscsi_start_portal_group", 00:05:29.278 "iscsi_delete_portal_group", 00:05:29.278 "iscsi_create_portal_group", 00:05:29.278 "iscsi_get_portal_groups", 00:05:29.278 "iscsi_delete_target_node", 00:05:29.278 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.278 "iscsi_target_node_add_pg_ig_maps", 00:05:29.278 "iscsi_create_target_node", 00:05:29.278 "iscsi_get_target_nodes", 00:05:29.278 "iscsi_delete_initiator_group", 00:05:29.278 "iscsi_initiator_group_remove_initiators", 00:05:29.278 "iscsi_initiator_group_add_initiators", 00:05:29.278 "iscsi_create_initiator_group", 00:05:29.278 "iscsi_get_initiator_groups", 00:05:29.278 "nvmf_set_crdt", 00:05:29.278 "nvmf_set_config", 00:05:29.278 "nvmf_set_max_subsystems", 00:05:29.278 "nvmf_stop_mdns_prr", 00:05:29.278 "nvmf_publish_mdns_prr", 00:05:29.278 "nvmf_subsystem_get_listeners", 00:05:29.278 
"nvmf_subsystem_get_qpairs", 00:05:29.278 "nvmf_subsystem_get_controllers", 00:05:29.278 "nvmf_get_stats", 00:05:29.278 "nvmf_get_transports", 00:05:29.278 "nvmf_create_transport", 00:05:29.278 "nvmf_get_targets", 00:05:29.278 "nvmf_delete_target", 00:05:29.278 "nvmf_create_target", 00:05:29.278 "nvmf_subsystem_allow_any_host", 00:05:29.278 "nvmf_subsystem_set_keys", 00:05:29.278 "nvmf_subsystem_remove_host", 00:05:29.278 "nvmf_subsystem_add_host", 00:05:29.278 "nvmf_ns_remove_host", 00:05:29.278 "nvmf_ns_add_host", 00:05:29.278 "nvmf_subsystem_remove_ns", 00:05:29.278 "nvmf_subsystem_set_ns_ana_group", 00:05:29.278 "nvmf_subsystem_add_ns", 00:05:29.278 "nvmf_subsystem_listener_set_ana_state", 00:05:29.278 "nvmf_discovery_get_referrals", 00:05:29.278 "nvmf_discovery_remove_referral", 00:05:29.278 "nvmf_discovery_add_referral", 00:05:29.278 "nvmf_subsystem_remove_listener", 00:05:29.278 "nvmf_subsystem_add_listener", 00:05:29.278 "nvmf_delete_subsystem", 00:05:29.278 "nvmf_create_subsystem", 00:05:29.278 "nvmf_get_subsystems", 00:05:29.278 "env_dpdk_get_mem_stats", 00:05:29.278 "nbd_get_disks", 00:05:29.278 "nbd_stop_disk", 00:05:29.278 "nbd_start_disk", 00:05:29.278 "ublk_recover_disk", 00:05:29.278 "ublk_get_disks", 00:05:29.278 "ublk_stop_disk", 00:05:29.278 "ublk_start_disk", 00:05:29.278 "ublk_destroy_target", 00:05:29.278 "ublk_create_target", 00:05:29.278 "virtio_blk_create_transport", 00:05:29.278 "virtio_blk_get_transports", 00:05:29.278 "vhost_controller_set_coalescing", 00:05:29.278 "vhost_get_controllers", 00:05:29.278 "vhost_delete_controller", 00:05:29.278 "vhost_create_blk_controller", 00:05:29.278 "vhost_scsi_controller_remove_target", 00:05:29.278 "vhost_scsi_controller_add_target", 00:05:29.278 "vhost_start_scsi_controller", 00:05:29.278 "vhost_create_scsi_controller", 00:05:29.278 "thread_set_cpumask", 00:05:29.278 "scheduler_set_options", 00:05:29.278 "framework_get_governor", 00:05:29.278 "framework_get_scheduler", 00:05:29.278 "framework_set_scheduler", 00:05:29.278 "framework_get_reactors", 00:05:29.278 "thread_get_io_channels", 00:05:29.278 "thread_get_pollers", 00:05:29.278 "thread_get_stats", 00:05:29.278 "framework_monitor_context_switch", 00:05:29.278 "spdk_kill_instance", 00:05:29.278 "log_enable_timestamps", 00:05:29.278 "log_get_flags", 00:05:29.278 "log_clear_flag", 00:05:29.278 "log_set_flag", 00:05:29.278 "log_get_level", 00:05:29.278 "log_set_level", 00:05:29.278 "log_get_print_level", 00:05:29.278 "log_set_print_level", 00:05:29.278 "framework_enable_cpumask_locks", 00:05:29.278 "framework_disable_cpumask_locks", 00:05:29.278 "framework_wait_init", 00:05:29.278 "framework_start_init", 00:05:29.278 "scsi_get_devices", 00:05:29.278 "bdev_get_histogram", 00:05:29.278 "bdev_enable_histogram", 00:05:29.278 "bdev_set_qos_limit", 00:05:29.278 "bdev_set_qd_sampling_period", 00:05:29.278 "bdev_get_bdevs", 00:05:29.278 "bdev_reset_iostat", 00:05:29.278 "bdev_get_iostat", 00:05:29.278 "bdev_examine", 00:05:29.278 "bdev_wait_for_examine", 00:05:29.278 "bdev_set_options", 00:05:29.278 "accel_get_stats", 00:05:29.278 "accel_set_options", 00:05:29.278 "accel_set_driver", 00:05:29.278 "accel_crypto_key_destroy", 00:05:29.278 "accel_crypto_keys_get", 00:05:29.278 "accel_crypto_key_create", 00:05:29.278 "accel_assign_opc", 00:05:29.278 "accel_get_module_info", 00:05:29.278 "accel_get_opc_assignments", 00:05:29.278 "vmd_rescan", 00:05:29.278 "vmd_remove_device", 00:05:29.278 "vmd_enable", 00:05:29.278 "sock_get_default_impl", 00:05:29.278 "sock_set_default_impl", 
00:05:29.278 "sock_impl_set_options", 00:05:29.278 "sock_impl_get_options", 00:05:29.278 "iobuf_get_stats", 00:05:29.278 "iobuf_set_options", 00:05:29.278 "keyring_get_keys", 00:05:29.278 "vfu_tgt_set_base_path", 00:05:29.278 "framework_get_pci_devices", 00:05:29.278 "framework_get_config", 00:05:29.278 "framework_get_subsystems", 00:05:29.278 "fsdev_set_opts", 00:05:29.278 "fsdev_get_opts", 00:05:29.278 "trace_get_info", 00:05:29.278 "trace_get_tpoint_group_mask", 00:05:29.278 "trace_disable_tpoint_group", 00:05:29.278 "trace_enable_tpoint_group", 00:05:29.278 "trace_clear_tpoint_mask", 00:05:29.278 "trace_set_tpoint_mask", 00:05:29.278 "notify_get_notifications", 00:05:29.278 "notify_get_types", 00:05:29.278 "spdk_get_version", 00:05:29.278 "rpc_get_methods" 00:05:29.278 ] 00:05:29.278 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.278 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.278 09:23:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2542499 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2542499 ']' 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2542499 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542499 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542499' 00:05:29.278 killing process with pid 2542499 00:05:29.278 09:23:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2542499 00:05:29.279 09:23:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2542499 00:05:29.540 00:05:29.540 real 0m1.512s 00:05:29.540 user 0m2.775s 00:05:29.540 sys 0m0.465s 00:05:29.540 09:23:04 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.540 09:23:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.540 ************************************ 00:05:29.540 END TEST spdkcli_tcp 00:05:29.541 ************************************ 00:05:29.541 09:23:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.541 09:23:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.541 09:23:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.541 09:23:04 -- common/autotest_common.sh@10 -- # set +x 00:05:29.541 ************************************ 00:05:29.541 START TEST dpdk_mem_utility 00:05:29.541 ************************************ 00:05:29.541 09:23:04 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:29.541 * Looking for test storage... 
00:05:29.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:29.541 09:23:04 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.810 09:23:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.810 09:23:04 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.810 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:29.810 09:23:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.811 09:23:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.811 --rc genhtml_branch_coverage=1 00:05:29.811 --rc genhtml_function_coverage=1 00:05:29.811 --rc genhtml_legend=1 00:05:29.811 --rc geninfo_all_blocks=1 00:05:29.811 --rc geninfo_unexecuted_blocks=1 00:05:29.811 00:05:29.811 ' 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.811 --rc 
genhtml_branch_coverage=1 00:05:29.811 --rc genhtml_function_coverage=1 00:05:29.811 --rc genhtml_legend=1 00:05:29.811 --rc geninfo_all_blocks=1 00:05:29.811 --rc geninfo_unexecuted_blocks=1 00:05:29.811 00:05:29.811 ' 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.811 --rc genhtml_branch_coverage=1 00:05:29.811 --rc genhtml_function_coverage=1 00:05:29.811 --rc genhtml_legend=1 00:05:29.811 --rc geninfo_all_blocks=1 00:05:29.811 --rc geninfo_unexecuted_blocks=1 00:05:29.811 00:05:29.811 ' 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.811 --rc genhtml_branch_coverage=1 00:05:29.811 --rc genhtml_function_coverage=1 00:05:29.811 --rc genhtml_legend=1 00:05:29.811 --rc geninfo_all_blocks=1 00:05:29.811 --rc geninfo_unexecuted_blocks=1 00:05:29.811 00:05:29.811 ' 00:05:29.811 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:29.811 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2542908 00:05:29.811 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2542908 00:05:29.811 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2542908 ']' 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.811 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.812 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.812 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.812 [2024-12-09 09:23:05.147748] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:05:29.812 [2024-12-09 09:23:05.147828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542908 ] 00:05:29.812 [2024-12-09 09:23:05.234452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.812 [2024-12-09 09:23:05.254037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.753 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.753 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:30.753 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.753 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.753 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.753 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.753 { 00:05:30.753 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.753 } 00:05:30.753 09:23:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.753 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.753 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:30.753 1 heaps totaling size 818.000000 MiB 00:05:30.753 size: 818.000000 MiB heap id: 0 00:05:30.753 end heaps---------- 00:05:30.753 9 mempools totaling size 603.782043 MiB 00:05:30.754 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.754 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.754 size: 100.555481 MiB name: bdev_io_2542908 00:05:30.754 size: 50.003479 MiB name: msgpool_2542908 00:05:30.754 size: 36.509338 MiB name: fsdev_io_2542908 00:05:30.754 size: 21.763794 MiB name: PDU_Pool 00:05:30.754 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.754 size: 4.133484 MiB name: evtpool_2542908 00:05:30.754 size: 0.026123 MiB name: Session_Pool 00:05:30.754 end mempools------- 00:05:30.754 6 memzones totaling size 4.142822 MiB 00:05:30.754 size: 1.000366 MiB name: RG_ring_0_2542908 00:05:30.754 size: 1.000366 MiB name: RG_ring_1_2542908 00:05:30.754 size: 1.000366 MiB name: RG_ring_4_2542908 00:05:30.754 size: 1.000366 MiB name: RG_ring_5_2542908 00:05:30.754 size: 0.125366 MiB name: RG_ring_2_2542908 00:05:30.754 size: 0.015991 MiB name: RG_ring_3_2542908 00:05:30.754 end memzones------- 00:05:30.754 09:23:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.754 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:30.754 list of free elements. 
size: 10.852478 MiB 00:05:30.754 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:30.754 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:30.754 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:30.754 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:30.754 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:30.754 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:30.754 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:30.754 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:30.754 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:30.754 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:30.754 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:30.754 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:30.754 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:30.754 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:30.754 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:30.754 list of standard malloc elements. size: 199.218628 MiB 00:05:30.754 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:30.754 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:30.754 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:30.754 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:30.754 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:30.754 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:30.754 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:30.754 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:30.754 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:30.754 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:05:30.754 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:30.754 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:30.754 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:30.754 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:30.754 list of memzone associated elements. size: 607.928894 MiB 00:05:30.754 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:30.754 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:30.754 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:30.754 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:30.754 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:30.754 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2542908_0 00:05:30.754 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:30.754 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2542908_0 00:05:30.754 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:30.754 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2542908_0 00:05:30.754 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:30.754 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:30.754 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:30.754 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:30.754 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:30.754 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2542908_0 00:05:30.754 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:30.754 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2542908 00:05:30.754 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:30.754 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2542908 00:05:30.754 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:30.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:30.754 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:30.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:30.754 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:30.754 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:30.754 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:30.754 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:30.754 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:30.754 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2542908 00:05:30.754 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:30.754 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2542908 00:05:30.754 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:30.754 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2542908 00:05:30.754 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:05:30.754 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2542908 00:05:30.754 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:30.754 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2542908 00:05:30.754 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:30.754 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2542908 00:05:30.754 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:30.754 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:30.754 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:30.754 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:30.754 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:30.754 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:30.754 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:30.754 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2542908 00:05:30.754 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:30.754 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2542908 00:05:30.754 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:30.754 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:30.754 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:30.754 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:30.754 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:30.754 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2542908 00:05:30.754 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:30.754 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:30.754 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:30.754 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2542908 00:05:30.754 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:30.754 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2542908 00:05:30.754 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:30.754 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2542908 00:05:30.754 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:30.754 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:30.754 09:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:30.754 09:23:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2542908 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2542908 ']' 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2542908 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2542908 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2542908' 00:05:30.754 killing process with pid 2542908 00:05:30.754 09:23:06 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2542908 00:05:30.754 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2542908 00:05:31.014 00:05:31.014 real 0m1.397s 00:05:31.014 user 0m1.479s 00:05:31.014 sys 0m0.413s 00:05:31.014 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.014 09:23:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.014 ************************************ 00:05:31.014 END TEST dpdk_mem_utility 00:05:31.014 ************************************ 00:05:31.014 09:23:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.014 09:23:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.014 09:23:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.014 09:23:06 -- common/autotest_common.sh@10 -- # set +x 00:05:31.014 ************************************ 00:05:31.014 START TEST event 00:05:31.014 ************************************ 00:05:31.014 09:23:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:31.014 * Looking for test storage... 00:05:31.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:31.014 09:23:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.014 09:23:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.014 09:23:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.274 09:23:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.274 09:23:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.274 09:23:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.274 09:23:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.274 09:23:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.274 09:23:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.274 09:23:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.274 09:23:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.274 09:23:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.274 09:23:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.274 09:23:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.274 09:23:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.274 09:23:06 event -- scripts/common.sh@344 -- # case "$op" in 00:05:31.274 09:23:06 event -- scripts/common.sh@345 -- # : 1 00:05:31.274 09:23:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.274 09:23:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.274 09:23:06 event -- scripts/common.sh@365 -- # decimal 1 00:05:31.274 09:23:06 event -- scripts/common.sh@353 -- # local d=1 00:05:31.274 09:23:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.274 09:23:06 event -- scripts/common.sh@355 -- # echo 1 00:05:31.274 09:23:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.274 09:23:06 event -- scripts/common.sh@366 -- # decimal 2 00:05:31.274 09:23:06 event -- scripts/common.sh@353 -- # local d=2 00:05:31.274 09:23:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.275 09:23:06 event -- scripts/common.sh@355 -- # echo 2 00:05:31.275 09:23:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.275 09:23:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.275 09:23:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.275 09:23:06 event -- scripts/common.sh@368 -- # return 0 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.275 --rc genhtml_branch_coverage=1 00:05:31.275 --rc genhtml_function_coverage=1 00:05:31.275 --rc genhtml_legend=1 00:05:31.275 --rc geninfo_all_blocks=1 00:05:31.275 --rc geninfo_unexecuted_blocks=1 00:05:31.275 00:05:31.275 ' 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.275 --rc genhtml_branch_coverage=1 00:05:31.275 --rc genhtml_function_coverage=1 00:05:31.275 --rc genhtml_legend=1 00:05:31.275 --rc geninfo_all_blocks=1 00:05:31.275 --rc geninfo_unexecuted_blocks=1 00:05:31.275 00:05:31.275 ' 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.275 --rc genhtml_branch_coverage=1 00:05:31.275 --rc genhtml_function_coverage=1 00:05:31.275 --rc genhtml_legend=1 00:05:31.275 --rc geninfo_all_blocks=1 00:05:31.275 --rc geninfo_unexecuted_blocks=1 00:05:31.275 00:05:31.275 ' 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.275 --rc genhtml_branch_coverage=1 00:05:31.275 --rc genhtml_function_coverage=1 00:05:31.275 --rc genhtml_legend=1 00:05:31.275 --rc geninfo_all_blocks=1 00:05:31.275 --rc geninfo_unexecuted_blocks=1 00:05:31.275 00:05:31.275 ' 00:05:31.275 09:23:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:31.275 09:23:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.275 09:23:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:31.275 09:23:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.275 09:23:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.275 ************************************ 00:05:31.275 START TEST event_perf 00:05:31.275 ************************************ 00:05:31.275 09:23:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:05:31.275 Running I/O for 1 seconds...[2024-12-09 09:23:06.625499] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:31.275 [2024-12-09 09:23:06.625583] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543305 ] 00:05:31.275 [2024-12-09 09:23:06.716752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.535 [2024-12-09 09:23:06.743328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.535 [2024-12-09 09:23:06.743452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.535 [2024-12-09 09:23:06.743617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.535 Running I/O for 1 seconds...[2024-12-09 09:23:06.743618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.503 00:05:32.503 lcore 0: 182590 00:05:32.503 lcore 1: 182593 00:05:32.503 lcore 2: 182590 00:05:32.503 lcore 3: 182593 00:05:32.503 done. 00:05:32.503 00:05:32.503 real 0m1.163s 00:05:32.503 user 0m4.070s 00:05:32.503 sys 0m0.090s 00:05:32.503 09:23:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.503 09:23:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.503 ************************************ 00:05:32.503 END TEST event_perf 00:05:32.503 ************************************ 00:05:32.503 09:23:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.503 09:23:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:32.503 09:23:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.503 09:23:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.503 ************************************ 00:05:32.503 START TEST event_reactor 00:05:32.503 ************************************ 00:05:32.503 09:23:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:32.503 [2024-12-09 09:23:07.867931] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
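As a quick sanity check on the event_perf numbers above, the four reactors each retired roughly 182,590 events during the 1-second run; summing the lcore lines gives the aggregate rate. A one-liner sketch (the log file name is illustrative):

# Hypothetical: event_perf output saved to event_perf.log.
awk '/^lcore [0-9]+:/ { total += $3 } END { printf "%d events total (~%d events/s over the 1 s run)\n", total, total }' event_perf.log
# With the counts above: 182590 + 182593 + 182590 + 182593 = 730366 events.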
00:05:32.503 [2024-12-09 09:23:07.868050] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543656 ] 00:05:32.764 [2024-12-09 09:23:07.961449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.764 [2024-12-09 09:23:07.981148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.705 test_start 00:05:33.705 oneshot 00:05:33.705 tick 100 00:05:33.705 tick 100 00:05:33.705 tick 250 00:05:33.705 tick 100 00:05:33.705 tick 100 00:05:33.705 tick 100 00:05:33.705 tick 250 00:05:33.705 tick 500 00:05:33.705 tick 100 00:05:33.705 tick 100 00:05:33.705 tick 250 00:05:33.705 tick 100 00:05:33.705 tick 100 00:05:33.705 test_end 00:05:33.705 00:05:33.705 real 0m1.158s 00:05:33.705 user 0m1.071s 00:05:33.705 sys 0m0.083s 00:05:33.705 09:23:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.705 09:23:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:33.705 ************************************ 00:05:33.705 END TEST event_reactor 00:05:33.705 ************************************ 00:05:33.705 09:23:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.705 09:23:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:33.705 09:23:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.705 09:23:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.705 ************************************ 00:05:33.705 START TEST event_reactor_perf 00:05:33.705 ************************************ 00:05:33.705 09:23:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:33.705 [2024-12-09 09:23:09.103085] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
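The event_reactor output above appears to be a trace of timer pollers firing during the 1-second run: each "tick N" line marks an expiry of a poller registered with period N. Tallying the lines shows the expected inverse relationship between period and count (log file name illustrative):

# Hypothetical: reactor output saved to reactor.log.
grep '^tick ' reactor.log | sort -k2 -n | uniq -c
# From the run above: period 100 fired nine times, 250 three times, 500 once.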
00:05:33.705 [2024-12-09 09:23:09.103170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543953 ] 00:05:33.965 [2024-12-09 09:23:09.191802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.966 [2024-12-09 09:23:09.213658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.907 test_start 00:05:34.907 test_end 00:05:34.907 Performance: 535390 events per second 00:05:34.907 00:05:34.907 real 0m1.155s 00:05:34.907 user 0m1.070s 00:05:34.907 sys 0m0.081s 00:05:34.907 09:23:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.907 09:23:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.907 ************************************ 00:05:34.907 END TEST event_reactor_perf 00:05:34.907 ************************************ 00:05:34.907 09:23:10 event -- event/event.sh@49 -- # uname -s 00:05:34.907 09:23:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:34.907 09:23:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:34.907 09:23:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.907 09:23:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.907 09:23:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.907 ************************************ 00:05:34.907 START TEST event_scheduler 00:05:34.907 ************************************ 00:05:34.907 09:23:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:35.169 * Looking for test storage... 
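The "lt 1.15 2" check that runs before each LCOV block (twice above, and again in the trace just below) is a small component-wise version comparison from scripts/common.sh. A minimal self-contained sketch of the logic the trace walks through (plain bash; padding missing fields with 0 is a simplification here):

# Split versions on '.', '-' and ':' and compare numerically, field by field.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-: op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the 'return 0' seen in the trace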
00:05:35.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:35.169 09:23:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.169 09:23:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.169 09:23:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.169 09:23:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.169 09:23:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.169 09:23:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.169 09:23:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.170 09:23:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.170 --rc genhtml_branch_coverage=1 00:05:35.170 --rc genhtml_function_coverage=1 00:05:35.170 --rc genhtml_legend=1 00:05:35.170 --rc geninfo_all_blocks=1 00:05:35.170 --rc geninfo_unexecuted_blocks=1 00:05:35.170 00:05:35.170 ' 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.170 --rc genhtml_branch_coverage=1 00:05:35.170 --rc genhtml_function_coverage=1 00:05:35.170 --rc genhtml_legend=1 00:05:35.170 --rc geninfo_all_blocks=1 00:05:35.170 --rc geninfo_unexecuted_blocks=1 00:05:35.170 00:05:35.170 ' 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.170 --rc genhtml_branch_coverage=1 00:05:35.170 --rc genhtml_function_coverage=1 00:05:35.170 --rc genhtml_legend=1 00:05:35.170 --rc geninfo_all_blocks=1 00:05:35.170 --rc geninfo_unexecuted_blocks=1 00:05:35.170 00:05:35.170 ' 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.170 --rc genhtml_branch_coverage=1 00:05:35.170 --rc genhtml_function_coverage=1 00:05:35.170 --rc genhtml_legend=1 00:05:35.170 --rc geninfo_all_blocks=1 00:05:35.170 --rc geninfo_unexecuted_blocks=1 00:05:35.170 00:05:35.170 ' 00:05:35.170 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:35.170 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2544214 00:05:35.170 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.170 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2544214 00:05:35.170 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc 
-f 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2544214 ']' 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.170 09:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.170 [2024-12-09 09:23:10.575553] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:35.170 [2024-12-09 09:23:10.575628] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544214 ] 00:05:35.432 [2024-12-09 09:23:10.644000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:35.432 [2024-12-09 09:23:10.668686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.432 [2024-12-09 09:23:10.668817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.432 [2024-12-09 09:23:10.668818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.432 [2024-12-09 09:23:10.668775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:35.432 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 [2024-12-09 09:23:10.729674] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:35.432 [2024-12-09 09:23:10.729689] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:35.432 [2024-12-09 09:23:10.729696] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:35.432 [2024-12-09 09:23:10.729701] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:35.432 [2024-12-09 09:23:10.729705] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 [2024-12-09 09:23:10.783143] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
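Condensed, the scheduler bring-up the trace above drives is: start the app with --wait-for-rpc, select the dynamic scheduler, then finish initialization. A sketch of the same RPC sequence issued directly (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the default /var/tmp/spdk.sock socket is assumed):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# The app idles in the pre-init state because it was started with --wait-for-rpc.
$SPDK/scripts/rpc.py framework_set_scheduler dynamic   # must happen before init
$SPDK/scripts/rpc.py framework_start_init              # completes subsystem init
# Per the NOTICE lines above, the dynamic scheduler still comes up even when
# the DPDK governor cannot initialize on this core mask.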
00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 ************************************ 00:05:35.432 START TEST scheduler_create_thread 00:05:35.432 ************************************ 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 2 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 3 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 4 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 5 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.432 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.432 6 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.694 7 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.694 8 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.694 9 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.694 09:23:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.955 10 00:05:35.955 09:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.955 09:23:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:35.955 09:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.955 09:23:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.464 09:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.464 09:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.464 09:23:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.464 09:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.464 09:23:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.411 09:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.411 09:23:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:38.411 09:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.411 09:23:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.982 09:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.982 09:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:38.982 09:23:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:38.982 09:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.982 09:23:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.929 09:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.930 00:05:39.930 real 0m4.225s 00:05:39.930 user 0m0.029s 00:05:39.930 sys 0m0.003s 00:05:39.930 09:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.930 09:23:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.930 ************************************ 00:05:39.930 END TEST scheduler_create_thread 00:05:39.930 ************************************ 00:05:39.930 09:23:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.930 09:23:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2544214 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2544214 ']' 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2544214 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544214 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544214' 00:05:39.930 killing process with pid 2544214 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2544214 00:05:39.930 09:23:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2544214 00:05:39.930 [2024-12-09 09:23:15.324316] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
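The create/retune/delete cycle just traced reduces to four plugin RPCs (commands verbatim from the trace; rpc_cmd is the harness wrapper, and the returned thread ids 11 and 12 are specific to this run):

rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # busy thread pinned to core 0
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0             # idle thread; this run returned id 11
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # retune it to 50% active
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12                              # drop the 'deleted' thread (id 12)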
00:05:40.193 00:05:40.193 real 0m5.156s 00:05:40.193 user 0m10.278s 00:05:40.193 sys 0m0.374s 00:05:40.193 09:23:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.193 09:23:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:40.193 ************************************ 00:05:40.193 END TEST event_scheduler 00:05:40.193 ************************************ 00:05:40.193 09:23:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:40.193 09:23:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:40.193 09:23:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.193 09:23:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.193 09:23:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:40.193 ************************************ 00:05:40.193 START TEST app_repeat 00:05:40.193 ************************************ 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2545262 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2545262' 00:05:40.194 Process app_repeat pid: 2545262 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:40.194 spdk_app_start Round 0 00:05:40.194 09:23:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2545262 /var/tmp/spdk-nbd.sock 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2545262 ']' 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.194 09:23:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.194 [2024-12-09 09:23:15.602834] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
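Each app_repeat round, driven below, is a malloc-bdev/NBD round trip. A condensed sketch of one pass (same rpc.py and socket as in the trace; /tmp/pattern is an illustrative stand-in for the test's nbdrandtest file, and /dev/nbd* access needs root):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC bdev_malloc_create 64 4096          # 64 MiB malloc bdev, 4 KiB blocks -> Malloc0
$RPC nbd_start_disk Malloc0 /dev/nbd0    # expose it as a kernel block device
dd if=/dev/urandom of=/tmp/pattern bs=4096 count=256
dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/pattern /dev/nbd0      # the 256 x 4 KiB just written must read back identically
$RPC nbd_stop_disk /dev/nbd0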
00:05:40.194 [2024-12-09 09:23:15.602902] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545262 ] 00:05:40.455 [2024-12-09 09:23:15.689734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.455 [2024-12-09 09:23:15.708982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.455 [2024-12-09 09:23:15.708983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.455 09:23:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.455 09:23:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:40.455 09:23:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.715 Malloc0 00:05:40.715 09:23:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.715 Malloc1 00:05:40.715 09:23:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.715 09:23:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.716 09:23:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.976 /dev/nbd0 00:05:40.976 09:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.976 09:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.976 1+0 records in 00:05:40.976 1+0 records out 00:05:40.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282141 s, 14.5 MB/s 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.976 09:23:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.976 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.976 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.976 09:23:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.238 /dev/nbd1 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.238 1+0 records in 00:05:41.238 1+0 records out 00:05:41.238 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306612 s, 13.4 MB/s 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:41.238 09:23:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.238 
09:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.238 09:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.499 { 00:05:41.499 "nbd_device": "/dev/nbd0", 00:05:41.499 "bdev_name": "Malloc0" 00:05:41.499 }, 00:05:41.499 { 00:05:41.499 "nbd_device": "/dev/nbd1", 00:05:41.499 "bdev_name": "Malloc1" 00:05:41.499 } 00:05:41.499 ]' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.499 { 00:05:41.499 "nbd_device": "/dev/nbd0", 00:05:41.499 "bdev_name": "Malloc0" 00:05:41.499 }, 00:05:41.499 { 00:05:41.499 "nbd_device": "/dev/nbd1", 00:05:41.499 "bdev_name": "Malloc1" 00:05:41.499 } 00:05:41.499 ]' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.499 /dev/nbd1' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.499 /dev/nbd1' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.499 256+0 records in 00:05:41.499 256+0 records out 00:05:41.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122886 s, 85.3 MB/s 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.499 256+0 records in 00:05:41.499 256+0 records out 00:05:41.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123914 s, 84.6 MB/s 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.499 256+0 records in 00:05:41.499 256+0 records out 00:05:41.499 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01321 s, 79.4 MB/s 00:05:41.499 09:23:16 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.499 09:23:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.769 09:23:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.030 09:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.289 09:23:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.289 09:23:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.289 09:23:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.550 [2024-12-09 09:23:17.746398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.550 [2024-12-09 09:23:17.761383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.550 [2024-12-09 09:23:17.761383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.550 [2024-12-09 09:23:17.790178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.550 [2024-12-09 09:23:17.790211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.859 09:23:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.859 09:23:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.859 spdk_app_start Round 1 00:05:45.859 09:23:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2545262 /var/tmp/spdk-nbd.sock 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2545262 ']' 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
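The Round 0 pass above is the core data-integrity check of this suite: fill a scratch file with random bytes, write it through each exported NBD device with O_DIRECT, then compare the first 1 MiB of every device back against the file. A minimal self-contained sketch of that flow; the device list and temp-file path are placeholder assumptions, not the harness's exact values:

#!/usr/bin/env bash
# Sketch of the nbd data-verify flow seen in the trace: write 1 MiB of random
# data to each NBD device with O_DIRECT, then verify it reads back byte-for-byte.
set -euo pipefail
nbd_list=(/dev/nbd0 /dev/nbd1)                              # assumed device list
tmp_file=$(mktemp)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256         # 256 x 4 KiB = 1 MiB
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct   # bypass the page cache on write
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                         # exits non-zero at the first differing byte
done
rm -f "$tmp_file"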
00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.859 09:23:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.860 09:23:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.860 Malloc0 00:05:45.860 09:23:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.860 Malloc1 00:05:45.860 09:23:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.860 09:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.120 /dev/nbd0 00:05:46.120 09:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.120 09:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:46.120 1+0 records in 00:05:46.120 1+0 records out 00:05:46.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270343 s, 15.2 MB/s 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.120 09:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.120 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.120 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.120 09:23:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.381 /dev/nbd1 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.381 1+0 records in 00:05:46.381 1+0 records out 00:05:46.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215144 s, 19.0 MB/s 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.381 09:23:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:46.381 { 00:05:46.381 "nbd_device": "/dev/nbd0", 00:05:46.381 "bdev_name": "Malloc0" 00:05:46.381 }, 00:05:46.381 { 00:05:46.381 "nbd_device": "/dev/nbd1", 00:05:46.381 "bdev_name": "Malloc1" 00:05:46.381 } 00:05:46.381 ]' 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.381 { 00:05:46.381 "nbd_device": "/dev/nbd0", 00:05:46.381 "bdev_name": "Malloc0" 00:05:46.381 }, 00:05:46.381 { 00:05:46.381 "nbd_device": "/dev/nbd1", 00:05:46.381 "bdev_name": "Malloc1" 00:05:46.381 } 00:05:46.381 ]' 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.381 /dev/nbd1' 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.381 /dev/nbd1' 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.381 09:23:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.643 256+0 records in 00:05:46.643 256+0 records out 00:05:46.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119606 s, 87.7 MB/s 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.643 256+0 records in 00:05:46.643 256+0 records out 00:05:46.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121256 s, 86.5 MB/s 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.643 256+0 records in 00:05:46.643 256+0 records out 00:05:46.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0128354 s, 81.7 MB/s 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.643 09:23:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.905 09:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.167 09:23:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.167 09:23:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.428 09:23:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.428 [2024-12-09 09:23:22.777021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.428 [2024-12-09 09:23:22.792060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.428 [2024-12-09 09:23:22.792062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.428 [2024-12-09 09:23:22.821605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.428 [2024-12-09 09:23:22.821641] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.731 09:23:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.731 09:23:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.731 spdk_app_start Round 2 00:05:50.731 09:23:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2545262 /var/tmp/spdk-nbd.sock 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2545262 ']' 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
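Each round closes with the same count assertion: nbd_get_disks is queried over the RPC socket, jq pulls out every nbd_device field, and grep -c counts the /dev/nbd matches (2 while the disks are attached, 0 after teardown). A sketch of that reduction, assuming SPDK's rpc.py is reachable on PATH and the target listens on the socket shown:

#!/usr/bin/env bash
# Sketch of nbd_get_count: ask the SPDK target which NBD devices it exports
# and reduce the JSON answer to a plain integer.
rpc_server=/var/tmp/spdk-nbd.sock                           # assumed RPC socket path
disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)         # [{"nbd_device": ..., "bdev_name": ...}, ...]
disks_name=$(jq -r '.[] | .nbd_device' <<< "$disks_json")   # one device path per line
count=$(grep -c /dev/nbd <<< "$disks_name" || true)         # grep -c exits 1 when the count is 0
echo "exported NBD devices: $count"
[ "$count" -eq 2 ] || exit 1                                # the round-trip assertion used by the test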
00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.731 09:23:25 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.731 09:23:25 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.731 Malloc0 00:05:50.731 09:23:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.992 Malloc1 00:05:50.992 09:23:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.992 /dev/nbd0 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:50.992 1+0 records in 00:05:50.992 1+0 records out 00:05:50.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279358 s, 14.7 MB/s 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.992 09:23:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.992 09:23:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.254 /dev/nbd1 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.254 1+0 records in 00:05:51.254 1+0 records out 00:05:51.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274578 s, 14.9 MB/s 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.254 09:23:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.254 09:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:51.515 { 00:05:51.515 "nbd_device": "/dev/nbd0", 00:05:51.515 "bdev_name": "Malloc0" 00:05:51.515 }, 00:05:51.515 { 00:05:51.515 "nbd_device": "/dev/nbd1", 00:05:51.515 "bdev_name": "Malloc1" 00:05:51.515 } 00:05:51.515 ]' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.515 { 00:05:51.515 "nbd_device": "/dev/nbd0", 00:05:51.515 "bdev_name": "Malloc0" 00:05:51.515 }, 00:05:51.515 { 00:05:51.515 "nbd_device": "/dev/nbd1", 00:05:51.515 "bdev_name": "Malloc1" 00:05:51.515 } 00:05:51.515 ]' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.515 /dev/nbd1' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.515 /dev/nbd1' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.515 256+0 records in 00:05:51.515 256+0 records out 00:05:51.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127305 s, 82.4 MB/s 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.515 256+0 records in 00:05:51.515 256+0 records out 00:05:51.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121039 s, 86.6 MB/s 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.515 09:23:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.515 256+0 records in 00:05:51.515 256+0 records out 00:05:51.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130505 s, 80.3 MB/s 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.516 09:23:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.777 09:23:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.037 09:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.298 09:23:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.298 09:23:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.298 09:23:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.559 [2024-12-09 09:23:27.824170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.559 [2024-12-09 09:23:27.839216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.559 [2024-12-09 09:23:27.839218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.559 [2024-12-09 09:23:27.868302] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.559 [2024-12-09 09:23:27.868335] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.883 09:23:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2545262 /var/tmp/spdk-nbd.sock 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2545262 ']' 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
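Teardown in each round mirrors startup: after nbd_stop_disk is sent over RPC, a wait loop polls /proc/partitions until the device name disappears, so the following count check never races the kernel's detach. A standalone sketch of that loop; the retry count and sleep interval here are illustrative:

#!/usr/bin/env bash
# Sketch of waitfornbd_exit: poll /proc/partitions until the kernel has
# removed the NBD entry, so the next test step sees a fully stopped device.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1              # still listed; give the detach time to finish
        else
            break                  # entry is gone, device fully stopped
        fi
    done
    ((i <= 20))                    # non-zero exit if 20 polls were not enough
}
waitfornbd_exit nbd0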
00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.883 09:23:30 event.app_repeat -- event/event.sh@39 -- # killprocess 2545262 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2545262 ']' 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2545262 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545262 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545262' 00:05:55.883 killing process with pid 2545262 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2545262 00:05:55.883 09:23:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2545262 00:05:55.883 spdk_app_start is called in Round 0. 00:05:55.883 Shutdown signal received, stop current app iteration 00:05:55.883 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:05:55.883 spdk_app_start is called in Round 1. 00:05:55.883 Shutdown signal received, stop current app iteration 00:05:55.883 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:05:55.883 spdk_app_start is called in Round 2. 00:05:55.883 Shutdown signal received, stop current app iteration 00:05:55.883 Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 reinitialization... 00:05:55.883 spdk_app_start is called in Round 3. 
00:05:55.883 Shutdown signal received, stop current app iteration 00:05:55.883 09:23:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.883 09:23:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.883 00:05:55.883 real 0m15.513s 00:05:55.883 user 0m34.024s 00:05:55.883 sys 0m2.221s 00:05:55.883 09:23:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.883 09:23:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 ************************************ 00:05:55.883 END TEST app_repeat 00:05:55.883 ************************************ 00:05:55.883 09:23:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.883 09:23:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.883 09:23:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.883 09:23:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.883 09:23:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.883 ************************************ 00:05:55.883 START TEST cpu_locks 00:05:55.883 ************************************ 00:05:55.883 09:23:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.883 * Looking for test storage... 00:05:55.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.883 09:23:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.883 09:23:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.883 09:23:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.143 09:23:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.143 --rc genhtml_branch_coverage=1 00:05:56.143 --rc genhtml_function_coverage=1 00:05:56.143 --rc genhtml_legend=1 00:05:56.143 --rc geninfo_all_blocks=1 00:05:56.143 --rc geninfo_unexecuted_blocks=1 00:05:56.143 00:05:56.143 ' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.143 --rc genhtml_branch_coverage=1 00:05:56.143 --rc genhtml_function_coverage=1 00:05:56.143 --rc genhtml_legend=1 00:05:56.143 --rc geninfo_all_blocks=1 00:05:56.143 --rc geninfo_unexecuted_blocks=1 00:05:56.143 00:05:56.143 ' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.143 --rc genhtml_branch_coverage=1 00:05:56.143 --rc genhtml_function_coverage=1 00:05:56.143 --rc genhtml_legend=1 00:05:56.143 --rc geninfo_all_blocks=1 00:05:56.143 --rc geninfo_unexecuted_blocks=1 00:05:56.143 00:05:56.143 ' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.143 --rc genhtml_branch_coverage=1 00:05:56.143 --rc genhtml_function_coverage=1 00:05:56.143 --rc genhtml_legend=1 00:05:56.143 --rc geninfo_all_blocks=1 00:05:56.143 --rc geninfo_unexecuted_blocks=1 00:05:56.143 00:05:56.143 ' 00:05:56.143 09:23:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.143 09:23:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.143 09:23:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.143 09:23:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.143 09:23:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.143 ************************************ 
00:05:56.143 START TEST default_locks 00:05:56.143 ************************************ 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2548724 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2548724 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2548724 ']' 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.143 09:23:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.143 [2024-12-09 09:23:31.462985] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:56.143 [2024-12-09 09:23:31.463049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2548724 ] 00:05:56.143 [2024-12-09 09:23:31.550383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.143 [2024-12-09 09:23:31.569360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.087 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.087 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:57.087 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2548724 00:05:57.087 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2548724 00:05:57.087 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.348 lslocks: write error 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2548724 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2548724 ']' 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2548724 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2548724 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2548724' 00:05:57.348 killing process with pid 2548724 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2548724 00:05:57.348 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2548724 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2548724 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2548724 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2548724 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2548724 ']' 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
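The killprocess sequence traced above follows a fixed shape: probe the pid with signal 0, read its command name with ps to make sure it is not a sudo wrapper, then kill it and wait to reap the exit status. A sketch of that shape, assuming, as in the harness, that the pid belongs to a child of the calling shell:

#!/usr/bin/env bash
# Sketch of the killprocess helper pattern from the trace.
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                      # signal 0: existence check only, nothing is sent
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1     # never kill a sudo wrapper by pid
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"             # reap the child and propagate its exit code
}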
00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2548724) - No such process 00:05:57.610 ERROR: process (pid: 2548724) is no longer running 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.610 00:05:57.610 real 0m1.436s 00:05:57.610 user 0m1.535s 00:05:57.610 sys 0m0.502s 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.610 09:23:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.610 ************************************ 00:05:57.610 END TEST default_locks 00:05:57.610 ************************************ 00:05:57.610 09:23:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.610 09:23:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.610 09:23:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.610 09:23:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.610 ************************************ 00:05:57.610 START TEST default_locks_via_rpc 00:05:57.610 ************************************ 00:05:57.610 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2549001 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2549001 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2549001 ']' 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
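The NOT wrapper exercised above turns an expected failure into a passing assertion: waitforlisten is run against a pid that was just killed, its non-zero exit is captured instead of aborting the suite, and NOT succeeds precisely because the wrapped command did not. A simplified sketch (the real helper also special-cases signal exits above 128 and an expected-status override):

#!/usr/bin/env bash
# Sketch of the NOT negative-assertion helper.
NOT() {
    local es=0
    "$@" || es=$?              # capture the exit status rather than tripping set -e
    (( es != 0 ))              # NOT succeeds only if the wrapped command failed
}

# usage: a command that is supposed to fail makes the assertion pass
NOT false && echo "assertion held: command failed as expected"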
00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.611 09:23:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.611 [2024-12-09 09:23:32.972430] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:05:57.611 [2024-12-09 09:23:32.972489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549001 ] 00:05:57.611 [2024-12-09 09:23:33.057524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.871 [2024-12-09 09:23:33.075788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2549001 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2549001 00:05:58.444 09:23:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2549001 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2549001 ']' 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2549001 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549001 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.016 
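Annotation: the default_locks_via_rpc trace above drives the core-lock lifecycle over JSON-RPC instead of at startup, then verifies the lock with lslocks. A minimal sketch of that flow follows; the RPC method names, target binary path, and the lslocks/grep check are taken from the trace, while the backgrounding, the sleep stand-in for waitforlisten, and the rpc.py location are illustrative assumptions.

# Sketch (assumptions noted above): toggle CPU core locks over RPC, then verify.
TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$TGT -m 0x1 &                                      # core locks active by default
tgt_pid=$!
sleep 2                                            # crude stand-in for waitforlisten
scripts/rpc.py framework_disable_cpumask_locks     # releases /var/tmp/spdk_cpu_lock_000
scripts/rpc.py framework_enable_cpumask_locks      # re-claims the core 0 lock
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by $tgt_pid"
kill "$tgt_pid" && wait "$tgt_pid"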
09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549001' 00:05:59.016 killing process with pid 2549001 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2549001 00:05:59.016 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2549001 00:05:59.277 00:05:59.277 real 0m1.619s 00:05:59.277 user 0m1.730s 00:05:59.277 sys 0m0.573s 00:05:59.277 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.277 09:23:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.277 ************************************ 00:05:59.277 END TEST default_locks_via_rpc 00:05:59.277 ************************************ 00:05:59.277 09:23:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:59.277 09:23:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.277 09:23:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.277 09:23:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.277 ************************************ 00:05:59.277 START TEST non_locking_app_on_locked_coremask 00:05:59.277 ************************************ 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2549366 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2549366 /var/tmp/spdk.sock 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2549366 ']' 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.277 09:23:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.277 [2024-12-09 09:23:34.664929] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:05:59.277 [2024-12-09 09:23:34.664984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549366 ] 00:05:59.538 [2024-12-09 09:23:34.750926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.538 [2024-12-09 09:23:34.768192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2549471 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2549471 /var/tmp/spdk2.sock 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2549471 ']' 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.116 09:23:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.116 [2024-12-09 09:23:35.483526] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:00.116 [2024-12-09 09:23:35.483578] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549471 ] 00:06:00.116 [2024-12-09 09:23:35.569045] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:00.376 [2024-12-09 09:23:35.569065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.376 [2024-12-09 09:23:35.601361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.948 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.948 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.948 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2549366 00:06:00.948 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2549366 00:06:00.948 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.538 lslocks: write error 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2549366 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2549366 ']' 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2549366 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549366 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549366' 00:06:01.538 killing process with pid 2549366 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2549366 00:06:01.538 09:23:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2549366 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2549471 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2549471 ']' 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2549471 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549471 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549471' 00:06:02.110 
killing process with pid 2549471 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2549471 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2549471 00:06:02.110 00:06:02.110 real 0m2.955s 00:06:02.110 user 0m3.280s 00:06:02.110 sys 0m0.919s 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.110 09:23:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.110 ************************************ 00:06:02.110 END TEST non_locking_app_on_locked_coremask 00:06:02.110 ************************************ 00:06:02.371 09:23:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:02.371 09:23:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.371 09:23:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.371 09:23:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 ************************************ 00:06:02.371 START TEST locking_app_on_unlocked_coremask 00:06:02.371 ************************************ 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2549881 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2549881 /var/tmp/spdk.sock 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2549881 ']' 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.371 09:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.371 [2024-12-09 09:23:37.694794] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:02.371 [2024-12-09 09:23:37.694849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2549881 ] 00:06:02.371 [2024-12-09 09:23:37.784980] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
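Annotation: the two tests around this point are mirror images of one pattern: non_locking_app_on_locked_coremask starts a locking primary and a --disable-cpumask-locks secondary on the same core, while locking_app_on_unlocked_coremask (starting above) disables locks on the primary so the secondary can claim the core. A sketch of the second variant, with the binary path and sockets from the trace and sleeps as illustrative stand-ins for waitforlisten:

TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$TGT -m 0x1 --disable-cpumask-locks &              # primary: core 0, takes no lock
sleep 2
$TGT -m 0x1 -r /var/tmp/spdk2.sock &               # secondary: claims the free lock
sleep 2
lslocks | grep spdk_cpu_lock                       # only the secondary is listed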
00:06:02.372 [2024-12-09 09:23:37.785005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.372 [2024-12-09 09:23:37.802658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2550179 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2550179 /var/tmp/spdk2.sock 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550179 ']' 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.314 09:23:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.314 [2024-12-09 09:23:38.512657] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:03.314 [2024-12-09 09:23:38.512709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550179 ] 00:06:03.314 [2024-12-09 09:23:38.598135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.314 [2024-12-09 09:23:38.630521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.886 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.886 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.886 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2550179 00:06:03.886 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2550179 00:06:03.886 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.828 lslocks: write error 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2549881 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2549881 ']' 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2549881 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549881 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549881' 00:06:04.829 killing process with pid 2549881 00:06:04.829 09:23:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2549881 00:06:04.829 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2549881 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2550179 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550179 ']' 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550179 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550179 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.088 09:23:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550179' 00:06:05.088 killing process with pid 2550179 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2550179 00:06:05.088 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2550179 00:06:05.349 00:06:05.349 real 0m2.966s 00:06:05.349 user 0m3.309s 00:06:05.349 sys 0m0.890s 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.349 ************************************ 00:06:05.349 END TEST locking_app_on_unlocked_coremask 00:06:05.349 ************************************ 00:06:05.349 09:23:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.349 09:23:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.349 09:23:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.349 09:23:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.349 ************************************ 00:06:05.349 START TEST locking_app_on_locked_coremask 00:06:05.349 ************************************ 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2550557 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2550557 /var/tmp/spdk.sock 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550557 ']' 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.349 09:23:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.349 [2024-12-09 09:23:40.735518] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:05.349 [2024-12-09 09:23:40.735574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550557 ] 00:06:05.610 [2024-12-09 09:23:40.821786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.610 [2024-12-09 09:23:40.838663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2550841 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2550841 /var/tmp/spdk2.sock 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2550841 /var/tmp/spdk2.sock 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2550841 /var/tmp/spdk2.sock 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2550841 ']' 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.179 09:23:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.179 [2024-12-09 09:23:41.557123] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:06.179 [2024-12-09 09:23:41.557176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550841 ] 00:06:06.438 [2024-12-09 09:23:41.643278] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2550557 has claimed it. 00:06:06.438 [2024-12-09 09:23:41.643313] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2550841) - No such process 00:06:07.009 ERROR: process (pid: 2550841) is no longer running 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2550557 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2550557 00:06:07.009 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.269 lslocks: write error 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2550557 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2550557 ']' 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2550557 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.269 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550557 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550557' 00:06:07.529 killing process with pid 2550557 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2550557 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2550557 00:06:07.529 00:06:07.529 real 0m2.230s 00:06:07.529 user 0m2.501s 00:06:07.529 sys 0m0.638s 00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
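Annotation: locking_app_on_locked_coremask exercises the collision path: the second spdk_tgt asked for core 0 while pid 2550557 still held /var/tmp/spdk_cpu_lock_000, so claim_cpu_cores refused the claim and the process exited, which the NOT(waitforlisten ...) wrapper counts as a pass. A sketch of reproducing that failure by hand (path from the trace, sleep as a stand-in for waitforlisten):

TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
$TGT -m 0x1 &                                      # claims core 0
sleep 2
$TGT -m 0x1 -r /var/tmp/spdk2.sock                 # same mask, locks active: refused
echo "exit code: $?"                               # non-zero after "Cannot create lock on core 0"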
00:06:07.529 09:23:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.529 ************************************ 00:06:07.529 END TEST locking_app_on_locked_coremask 00:06:07.529 ************************************ 00:06:07.529 09:23:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.529 09:23:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.529 09:23:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.529 09:23:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 ************************************ 00:06:07.790 START TEST locking_overlapped_coremask 00:06:07.790 ************************************ 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2551088 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2551088 /var/tmp/spdk.sock 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551088 ']' 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.790 09:23:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.790 [2024-12-09 09:23:43.042994] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:07.790 [2024-12-09 09:23:43.043048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551088 ] 00:06:07.790 [2024-12-09 09:23:43.127222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.790 [2024-12-09 09:23:43.145920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.790 [2024-12-09 09:23:43.146038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.790 [2024-12-09 09:23:43.146040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2551266 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2551266 /var/tmp/spdk2.sock 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2551266 /var/tmp/spdk2.sock 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2551266 /var/tmp/spdk2.sock 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2551266 ']' 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.732 09:23:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.732 [2024-12-09 09:23:43.893673] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:08.732 [2024-12-09 09:23:43.893728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551266 ] 00:06:08.732 [2024-12-09 09:23:43.982452] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2551088 has claimed it. 00:06:08.732 [2024-12-09 09:23:43.982482] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2551266) - No such process 00:06:09.304 ERROR: process (pid: 2551266) is no longer running 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2551088 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2551088 ']' 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2551088 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551088 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551088' 00:06:09.304 killing process with pid 2551088 00:06:09.304 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2551088 00:06:09.304 09:23:44 
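Annotation: after the -m 0x1c launch is refused, check_remaining_locks (traced above) asserts that exactly the lock files for cores 0-2 of the surviving -m 0x7 target remain. The comparison is a plain glob-against-brace-expansion match:

locks=(/var/tmp/spdk_cpu_lock_*)                   # what is actually on disk
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # cores 0-2 of mask 0x7
[[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"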
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2551088 00:06:09.565 00:06:09.565 real 0m1.774s 00:06:09.565 user 0m5.199s 00:06:09.565 sys 0m0.369s 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 ************************************ 00:06:09.565 END TEST locking_overlapped_coremask 00:06:09.565 ************************************ 00:06:09.565 09:23:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.565 09:23:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.565 09:23:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.565 09:23:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 ************************************ 00:06:09.565 START TEST locking_overlapped_coremask_via_rpc 00:06:09.565 ************************************ 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2551530 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2551530 /var/tmp/spdk.sock 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2551530 ']' 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.565 09:23:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 [2024-12-09 09:23:44.892591] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:09.565 [2024-12-09 09:23:44.892655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551530 ] 00:06:09.565 [2024-12-09 09:23:44.977652] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:09.565 [2024-12-09 09:23:44.977679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.565 [2024-12-09 09:23:44.998241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.565 [2024-12-09 09:23:44.998361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.565 [2024-12-09 09:23:44.998362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2551647 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2551647 /var/tmp/spdk2.sock 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2551647 ']' 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.506 09:23:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 [2024-12-09 09:23:45.751386] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:10.506 [2024-12-09 09:23:45.751441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2551647 ] 00:06:10.506 [2024-12-09 09:23:45.838292] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.506 [2024-12-09 09:23:45.838313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.506 [2024-12-09 09:23:45.872552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.506 [2024-12-09 09:23:45.879721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.506 [2024-12-09 09:23:45.879723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.446 [2024-12-09 09:23:46.551700] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2551530 has claimed it. 
00:06:11.446 request: 00:06:11.446 { 00:06:11.446 "method": "framework_enable_cpumask_locks", 00:06:11.446 "req_id": 1 00:06:11.446 } 00:06:11.446 Got JSON-RPC error response 00:06:11.446 response: 00:06:11.446 { 00:06:11.446 "code": -32603, 00:06:11.446 "message": "Failed to claim CPU core: 2" 00:06:11.446 } 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2551530 /var/tmp/spdk.sock 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2551530 ']' 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2551647 /var/tmp/spdk2.sock 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2551647 ']' 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
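Annotation: the JSON-RPC exchange above is the remote counterpart of the startup collision: core 2 is already locked by pid 2551530, so framework_enable_cpumask_locks on the second target fails with -32603 while both processes keep running. Sent by hand it would look like this (the rpc.py location is an assumption; the socket, method, and error body are from the log):

scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
#   --> {"method": "framework_enable_cpumask_locks", "req_id": 1}
#   <-- {"code": -32603, "message": "Failed to claim CPU core: 2"}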
00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.446 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.707 00:06:11.707 real 0m2.094s 00:06:11.707 user 0m0.868s 00:06:11.707 sys 0m0.146s 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.707 09:23:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.707 ************************************ 00:06:11.707 END TEST locking_overlapped_coremask_via_rpc 00:06:11.707 ************************************ 00:06:11.707 09:23:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.707 09:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2551530 ]] 00:06:11.707 09:23:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2551530 00:06:11.707 09:23:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551530 ']' 00:06:11.707 09:23:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551530 00:06:11.707 09:23:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.707 09:23:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.707 09:23:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551530 00:06:11.707 09:23:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.707 09:23:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.707 09:23:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551530' 00:06:11.707 killing process with pid 2551530 00:06:11.707 09:23:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2551530 00:06:11.707 09:23:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2551530 00:06:11.967 09:23:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2551647 ]] 00:06:11.967 09:23:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2551647 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551647 ']' 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551647 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2551647 00:06:11.967 09:23:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:11.968 09:23:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:11.968 09:23:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2551647' 00:06:11.968 killing process with pid 2551647 00:06:11.968 09:23:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2551647 00:06:11.968 09:23:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2551647 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2551530 ]] 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2551530 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551530 ']' 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551530 00:06:12.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2551530) - No such process 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2551530 is not found' 00:06:12.228 Process with pid 2551530 is not found 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2551647 ]] 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2551647 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2551647 ']' 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2551647 00:06:12.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2551647) - No such process 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2551647 is not found' 00:06:12.228 Process with pid 2551647 is not found 00:06:12.228 09:23:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.228 00:06:12.228 real 0m16.323s 00:06:12.228 user 0m28.619s 00:06:12.228 sys 0m4.984s 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.228 09:23:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 END TEST cpu_locks 00:06:12.228 ************************************ 00:06:12.228 00:06:12.228 real 0m41.160s 00:06:12.228 user 1m19.429s 00:06:12.228 sys 0m8.262s 00:06:12.228 09:23:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.228 09:23:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 END TEST event 00:06:12.228 ************************************ 00:06:12.228 09:23:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:12.228 09:23:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.228 09:23:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.228 09:23:47 -- common/autotest_common.sh@10 -- # set +x 00:06:12.228 ************************************ 00:06:12.228 START TEST thread 00:06:12.228 ************************************ 00:06:12.229 09:23:47 thread -- 
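Annotation: the cpu_locks cleanup traced above calls killprocess on both pids (already gone, hence the two "is not found" messages) and removes the lock files. A simplified sketch of that helper, paraphrased from the trace rather than copied from autotest_common.sh:

killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then          # pid already exited
        echo "Process with pid $pid is not found"
        return 0
    fi
    if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
        sudo kill "$pid"                           # sudo-wrapped target needs sudo kill
    else
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true                # reap it if it was our child
}
rm -f /var/tmp/spdk_cpu_lock_*                     # drop any leftover core-lock files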
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:12.490 * Looking for test storage... 00:06:12.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.490 09:23:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.490 09:23:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.490 09:23:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.490 09:23:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.490 09:23:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.490 09:23:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.490 09:23:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.490 09:23:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.490 09:23:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.490 09:23:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.490 09:23:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.490 09:23:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:12.490 09:23:47 thread -- scripts/common.sh@345 -- # : 1 00:06:12.490 09:23:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.490 09:23:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.490 09:23:47 thread -- scripts/common.sh@365 -- # decimal 1 00:06:12.490 09:23:47 thread -- scripts/common.sh@353 -- # local d=1 00:06:12.490 09:23:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.490 09:23:47 thread -- scripts/common.sh@355 -- # echo 1 00:06:12.490 09:23:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.490 09:23:47 thread -- scripts/common.sh@366 -- # decimal 2 00:06:12.490 09:23:47 thread -- scripts/common.sh@353 -- # local d=2 00:06:12.490 09:23:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.490 09:23:47 thread -- scripts/common.sh@355 -- # echo 2 00:06:12.490 09:23:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.490 09:23:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.490 09:23:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.490 09:23:47 thread -- scripts/common.sh@368 -- # return 0 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.490 --rc genhtml_branch_coverage=1 00:06:12.490 --rc genhtml_function_coverage=1 00:06:12.490 --rc genhtml_legend=1 00:06:12.490 --rc geninfo_all_blocks=1 00:06:12.490 --rc geninfo_unexecuted_blocks=1 00:06:12.490 00:06:12.490 ' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.490 --rc genhtml_branch_coverage=1 00:06:12.490 --rc genhtml_function_coverage=1 00:06:12.490 --rc genhtml_legend=1 00:06:12.490 --rc geninfo_all_blocks=1 00:06:12.490 --rc geninfo_unexecuted_blocks=1 00:06:12.490 
00:06:12.490 ' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.490 --rc genhtml_branch_coverage=1 00:06:12.490 --rc genhtml_function_coverage=1 00:06:12.490 --rc genhtml_legend=1 00:06:12.490 --rc geninfo_all_blocks=1 00:06:12.490 --rc geninfo_unexecuted_blocks=1 00:06:12.490 00:06:12.490 ' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.490 --rc genhtml_branch_coverage=1 00:06:12.490 --rc genhtml_function_coverage=1 00:06:12.490 --rc genhtml_legend=1 00:06:12.490 --rc geninfo_all_blocks=1 00:06:12.490 --rc geninfo_unexecuted_blocks=1 00:06:12.490 00:06:12.490 ' 00:06:12.490 09:23:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.490 09:23:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.490 ************************************ 00:06:12.490 START TEST thread_poller_perf 00:06:12.490 ************************************ 00:06:12.490 09:23:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:12.491 [2024-12-09 09:23:47.859400] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:12.491 [2024-12-09 09:23:47.859491] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552117 ] 00:06:12.752 [2024-12-09 09:23:47.946225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.752 [2024-12-09 09:23:47.965378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.752 Running 1000 pollers for 1 seconds with 1 microseconds period. 
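The thread_poller_perf run announced above registers 1000 pollers (-b 1000) with a 1-microsecond period (-l 1) and lets the reactor spin for one second (-t 1), counting how many times the pollers fire in total. A rough bash analogue of that measurement loop (the real tool is C and drives an SPDK reactor; do_poll here is a hypothetical stand-in for the registered callback, and the poller period is not modeled):

    #!/usr/bin/env bash
    # Sketch only: run a batch of "pollers" for a fixed time, count executions.
    pollers=1000                       # -b: number of pollers registered
    duration=1                         # -t: seconds to run
    total_run_count=0
    do_poll() { :; }                   # hypothetical poller callback
    end=$(( $(date +%s) + duration ))
    while (( $(date +%s) < end )); do
        for (( i = 0; i < pollers; i++ )); do
            do_poll
            total_run_count=$(( total_run_count + 1 ))
        done
    done
    echo "total_run_count: $total_run_count"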
00:06:13.695 [2024-12-09T08:23:49.148Z] ====================================== 00:06:13.695 [2024-12-09T08:23:49.148Z] busy:2409006562 (cyc) 00:06:13.695 [2024-12-09T08:23:49.148Z] total_run_count: 419000 00:06:13.695 [2024-12-09T08:23:49.148Z] tsc_hz: 2400000000 (cyc) 00:06:13.695 [2024-12-09T08:23:49.148Z] ====================================== 00:06:13.695 [2024-12-09T08:23:49.148Z] poller_cost: 5749 (cyc), 2395 (nsec) 00:06:13.695 00:06:13.695 real 0m1.156s 00:06:13.695 user 0m1.076s 00:06:13.695 sys 0m0.076s 00:06:13.695 09:23:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.695 09:23:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:13.695 ************************************ 00:06:13.695 END TEST thread_poller_perf 00:06:13.695 ************************************ 00:06:13.695 09:23:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.695 09:23:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:13.695 09:23:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.695 09:23:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.695 ************************************ 00:06:13.695 START TEST thread_poller_perf 00:06:13.695 ************************************ 00:06:13.695 09:23:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:13.695 [2024-12-09 09:23:49.092600] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:13.695 [2024-12-09 09:23:49.092695] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552440 ] 00:06:13.956 [2024-12-09 09:23:49.196031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.956 [2024-12-09 09:23:49.217854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.956 Running 1000 pollers for 1 seconds with 0 microseconds period. 
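The stats block above derives the per-poll figures from three reported values: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure converts those cycles at tsc_hz. Replaying the first run's numbers in shell arithmetic (the variable names are mine; the values are straight from the log):

    busy=2409006562          # busy TSC cycles
    total_run_count=419000
    tsc_hz=2400000000        # 2.4 GHz TSC
    cost_cyc=$(( busy / total_run_count ))            # 5749
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # 2395
    echo "poller_cost: $cost_cyc (cyc), $cost_nsec (nsec)"

The zero-period run that follows lands far cheaper per poll (471 cyc, 196 nsec), presumably because period-0 pollers are driven back to back without the timer bookkeeping a 1-microsecond period requires.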
00:06:14.898 [2024-12-09T08:23:50.351Z] ====================================== 00:06:14.898 [2024-12-09T08:23:50.351Z] busy:2401669662 (cyc) 00:06:14.898 [2024-12-09T08:23:50.351Z] total_run_count: 5091000 00:06:14.898 [2024-12-09T08:23:50.351Z] tsc_hz: 2400000000 (cyc) 00:06:14.898 [2024-12-09T08:23:50.351Z] ====================================== 00:06:14.898 [2024-12-09T08:23:50.351Z] poller_cost: 471 (cyc), 196 (nsec) 00:06:14.898 00:06:14.898 real 0m1.170s 00:06:14.898 user 0m1.067s 00:06:14.898 sys 0m0.098s 00:06:14.898 09:23:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.898 09:23:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.898 ************************************ 00:06:14.898 END TEST thread_poller_perf 00:06:14.898 ************************************ 00:06:14.898 09:23:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:14.898 00:06:14.898 real 0m2.684s 00:06:14.898 user 0m2.324s 00:06:14.898 sys 0m0.375s 00:06:14.898 09:23:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.898 09:23:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.898 ************************************ 00:06:14.898 END TEST thread 00:06:14.898 ************************************ 00:06:14.898 09:23:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:14.898 09:23:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:14.898 09:23:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.898 09:23:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.898 09:23:50 -- common/autotest_common.sh@10 -- # set +x 00:06:15.159 ************************************ 00:06:15.159 START TEST app_cmdline 00:06:15.159 ************************************ 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:15.159 * Looking for test storage... 
00:06:15.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.159 09:23:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 
00:06:15.159 00:06:15.159 ' 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.159 --rc genhtml_branch_coverage=1 00:06:15.159 --rc genhtml_function_coverage=1 00:06:15.159 --rc genhtml_legend=1 00:06:15.159 --rc geninfo_all_blocks=1 00:06:15.159 --rc geninfo_unexecuted_blocks=1 00:06:15.159 00:06:15.159 ' 00:06:15.159 09:23:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:15.159 09:23:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2552847 00:06:15.159 09:23:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2552847 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2552847 ']' 00:06:15.159 09:23:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.159 09:23:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:15.420 [2024-12-09 09:23:50.619905] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:15.420 [2024-12-09 09:23:50.619975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2552847 ] 00:06:15.420 [2024-12-09 09:23:50.709539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.420 [2024-12-09 09:23:50.728537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.990 09:23:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.990 09:23:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:15.990 09:23:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:16.251 { 00:06:16.251 "version": "SPDK v25.01-pre git sha1 a2f5e1c2d", 00:06:16.251 "fields": { 00:06:16.251 "major": 25, 00:06:16.251 "minor": 1, 00:06:16.251 "patch": 0, 00:06:16.251 "suffix": "-pre", 00:06:16.251 "commit": "a2f5e1c2d" 00:06:16.251 } 00:06:16.251 } 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:16.251 09:23:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:16.251 09:23:51 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:16.511 request: 00:06:16.511 { 00:06:16.511 "method": "env_dpdk_get_mem_stats", 00:06:16.511 "req_id": 1 00:06:16.511 } 00:06:16.511 Got JSON-RPC error response 00:06:16.511 response: 00:06:16.511 { 00:06:16.511 "code": -32601, 00:06:16.511 "message": "Method not found" 00:06:16.511 } 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.511 09:23:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2552847 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2552847 ']' 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2552847 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552847 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552847' 00:06:16.511 killing process with pid 2552847 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@973 -- # kill 2552847 00:06:16.511 09:23:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 2552847 00:06:16.769 00:06:16.769 real 0m1.720s 00:06:16.769 user 0m2.055s 00:06:16.769 sys 0m0.482s 00:06:16.769 09:23:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.769 09:23:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.769 ************************************ 00:06:16.769 END TEST app_cmdline 00:06:16.770 ************************************ 00:06:16.770 09:23:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:16.770 09:23:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.770 09:23:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.770 09:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:16.770 ************************************ 00:06:16.770 START TEST version 00:06:16.770 ************************************ 00:06:16.770 09:23:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:17.029 * Looking for test storage... 
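The JSON-RPC "Method not found" exchange above is the expected result: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method, env_dpdk_get_mem_stats included, is rejected with code -32601 and rpc.py exits non-zero (the es=1 in the trace). The same check by hand, assuming a target already listening on the default /var/tmp/spdk.sock:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc rpc_get_methods          # allowed: returns the whitelisted method list
    $rpc spdk_get_version         # allowed: returns the version JSON shown above
    $rpc env_dpdk_get_mem_stats   # rejected with JSON-RPC error -32601
    echo $?                       # non-zero, matching es=1 in the trace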
00:06:17.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.029 09:23:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.029 09:23:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.029 09:23:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.029 09:23:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.029 09:23:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.029 09:23:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.029 09:23:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.029 09:23:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.029 09:23:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.029 09:23:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.029 09:23:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.029 09:23:52 version -- scripts/common.sh@344 -- # case "$op" in 00:06:17.029 09:23:52 version -- scripts/common.sh@345 -- # : 1 00:06:17.029 09:23:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.029 09:23:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.029 09:23:52 version -- scripts/common.sh@365 -- # decimal 1 00:06:17.029 09:23:52 version -- scripts/common.sh@353 -- # local d=1 00:06:17.029 09:23:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.029 09:23:52 version -- scripts/common.sh@355 -- # echo 1 00:06:17.029 09:23:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.029 09:23:52 version -- scripts/common.sh@366 -- # decimal 2 00:06:17.029 09:23:52 version -- scripts/common.sh@353 -- # local d=2 00:06:17.029 09:23:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.029 09:23:52 version -- scripts/common.sh@355 -- # echo 2 00:06:17.029 09:23:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.029 09:23:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.029 09:23:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.029 09:23:52 version -- scripts/common.sh@368 -- # return 0 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.029 --rc genhtml_branch_coverage=1 00:06:17.029 --rc genhtml_function_coverage=1 00:06:17.029 --rc genhtml_legend=1 00:06:17.029 --rc geninfo_all_blocks=1 00:06:17.029 --rc geninfo_unexecuted_blocks=1 00:06:17.029 00:06:17.029 ' 00:06:17.029 09:23:52 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.029 --rc genhtml_branch_coverage=1 00:06:17.029 --rc genhtml_function_coverage=1 00:06:17.029 --rc genhtml_legend=1 00:06:17.030 --rc geninfo_all_blocks=1 00:06:17.030 --rc geninfo_unexecuted_blocks=1 00:06:17.030 00:06:17.030 ' 00:06:17.030 09:23:52 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.030 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.030 --rc genhtml_branch_coverage=1 00:06:17.030 --rc genhtml_function_coverage=1 00:06:17.030 --rc genhtml_legend=1 00:06:17.030 --rc geninfo_all_blocks=1 00:06:17.030 --rc geninfo_unexecuted_blocks=1 00:06:17.030 00:06:17.030 ' 00:06:17.030 09:23:52 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.030 --rc genhtml_branch_coverage=1 00:06:17.030 --rc genhtml_function_coverage=1 00:06:17.030 --rc genhtml_legend=1 00:06:17.030 --rc geninfo_all_blocks=1 00:06:17.030 --rc geninfo_unexecuted_blocks=1 00:06:17.030 00:06:17.030 ' 00:06:17.030 09:23:52 version -- app/version.sh@17 -- # get_header_version major 00:06:17.030 09:23:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # cut -f2 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.030 09:23:52 version -- app/version.sh@17 -- # major=25 00:06:17.030 09:23:52 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # cut -f2 00:06:17.030 09:23:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.030 09:23:52 version -- app/version.sh@18 -- # minor=1 00:06:17.030 09:23:52 version -- app/version.sh@19 -- # get_header_version patch 00:06:17.030 09:23:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # cut -f2 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.030 09:23:52 version -- app/version.sh@19 -- # patch=0 00:06:17.030 09:23:52 version -- app/version.sh@20 -- # get_header_version suffix 00:06:17.030 09:23:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # cut -f2 00:06:17.030 09:23:52 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.030 09:23:52 version -- app/version.sh@20 -- # suffix=-pre 00:06:17.030 09:23:52 version -- app/version.sh@22 -- # version=25.1 00:06:17.030 09:23:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:17.030 09:23:52 version -- app/version.sh@28 -- # version=25.1rc0 00:06:17.030 09:23:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.030 09:23:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:17.030 09:23:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:17.030 09:23:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:17.030 00:06:17.030 real 0m0.276s 00:06:17.030 user 0m0.159s 00:06:17.030 sys 0m0.163s 00:06:17.030 09:23:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.030 
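Each version component above is pulled straight out of include/spdk/version.h with the grep | cut | tr pipeline the trace shows. The same extraction as a standalone sketch (the path is this workspace's; the -pre to rc0 mapping is written here as a hard-coded assumption based on the traced result):

    hdr=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
    get_header_version() {
        # e.g. '#define SPDK_VERSION_MAJOR 25' -> 25 (cut -f2 splits on tabs)
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)     # 25
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch   # patch appended only if non-zero
    [[ $suffix == -pre ]] && version=${version}rc0
    echo "$version"                       # 25.1rc0, matching py_version above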
09:23:52 version -- common/autotest_common.sh@10 -- # set +x 00:06:17.030 ************************************ 00:06:17.030 END TEST version 00:06:17.030 ************************************ 00:06:17.030 09:23:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:17.030 09:23:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:17.030 09:23:52 -- spdk/autotest.sh@194 -- # uname -s 00:06:17.030 09:23:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:17.030 09:23:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:17.030 09:23:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:17.289 09:23:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:17.289 09:23:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.289 09:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:17.289 09:23:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:17.289 09:23:52 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:17.289 09:23:52 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:17.289 09:23:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.289 09:23:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.289 09:23:52 -- common/autotest_common.sh@10 -- # set +x 00:06:17.289 ************************************ 00:06:17.289 START TEST nvmf_tcp 00:06:17.289 ************************************ 00:06:17.289 09:23:52 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:17.289 * Looking for test storage... 
00:06:17.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:17.289 09:23:52 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.289 09:23:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.289 09:23:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.549 09:23:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.549 --rc genhtml_branch_coverage=1 00:06:17.549 --rc genhtml_function_coverage=1 00:06:17.549 --rc genhtml_legend=1 00:06:17.549 --rc geninfo_all_blocks=1 00:06:17.549 --rc geninfo_unexecuted_blocks=1 00:06:17.549 00:06:17.549 ' 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.549 --rc genhtml_branch_coverage=1 00:06:17.549 --rc genhtml_function_coverage=1 00:06:17.549 --rc genhtml_legend=1 00:06:17.549 --rc geninfo_all_blocks=1 00:06:17.549 --rc geninfo_unexecuted_blocks=1 00:06:17.549 00:06:17.549 ' 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:17.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.549 --rc genhtml_branch_coverage=1 00:06:17.549 --rc genhtml_function_coverage=1 00:06:17.549 --rc genhtml_legend=1 00:06:17.549 --rc geninfo_all_blocks=1 00:06:17.549 --rc geninfo_unexecuted_blocks=1 00:06:17.549 00:06:17.549 ' 00:06:17.549 09:23:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.550 --rc genhtml_branch_coverage=1 00:06:17.550 --rc genhtml_function_coverage=1 00:06:17.550 --rc genhtml_legend=1 00:06:17.550 --rc geninfo_all_blocks=1 00:06:17.550 --rc geninfo_unexecuted_blocks=1 00:06:17.550 00:06:17.550 ' 00:06:17.550 09:23:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:17.550 09:23:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:17.550 09:23:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:17.550 09:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.550 09:23:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.550 09:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:17.550 ************************************ 00:06:17.550 START TEST nvmf_target_core 00:06:17.550 ************************************ 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:17.550 * Looking for test storage... 00:06:17.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.550 09:23:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.810 --rc genhtml_branch_coverage=1 00:06:17.810 --rc genhtml_function_coverage=1 00:06:17.810 --rc genhtml_legend=1 00:06:17.810 --rc geninfo_all_blocks=1 00:06:17.810 --rc geninfo_unexecuted_blocks=1 00:06:17.810 00:06:17.810 ' 00:06:17.810 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.810 --rc genhtml_branch_coverage=1 00:06:17.810 --rc genhtml_function_coverage=1 00:06:17.810 --rc genhtml_legend=1 00:06:17.811 --rc geninfo_all_blocks=1 00:06:17.811 --rc geninfo_unexecuted_blocks=1 00:06:17.811 00:06:17.811 ' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.811 --rc genhtml_branch_coverage=1 00:06:17.811 --rc genhtml_function_coverage=1 00:06:17.811 --rc genhtml_legend=1 00:06:17.811 --rc geninfo_all_blocks=1 00:06:17.811 --rc geninfo_unexecuted_blocks=1 00:06:17.811 00:06:17.811 ' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.811 --rc genhtml_branch_coverage=1 00:06:17.811 --rc genhtml_function_coverage=1 00:06:17.811 --rc genhtml_legend=1 00:06:17.811 --rc geninfo_all_blocks=1 00:06:17.811 --rc geninfo_unexecuted_blocks=1 00:06:17.811 00:06:17.811 ' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:17.811 
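The "[: : integer expression expected" complaint above comes from test's -eq operator being handed an empty string: line 33 of nvmf/common.sh evaluates '[' '' -eq 1 ']' when the variable it checks is unset. The failure is benign in this log, the test simply evaluates false and the script carries on, but it is easy to reproduce and to silence (VAR is a placeholder name):

    VAR=""
    [ "$VAR" -eq 1 ] && echo matched       # prints: [: : integer expression expected
    [ "${VAR:-0}" -eq 1 ] && echo matched  # default empty to 0: no error, test false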
************************************ 00:06:17.811 START TEST nvmf_abort 00:06:17.811 ************************************ 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:17.811 * Looking for test storage... 00:06:17.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.811 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.072 --rc genhtml_branch_coverage=1 00:06:18.072 --rc genhtml_function_coverage=1 00:06:18.072 --rc genhtml_legend=1 00:06:18.072 --rc geninfo_all_blocks=1 00:06:18.072 --rc geninfo_unexecuted_blocks=1 00:06:18.072 00:06:18.072 ' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.072 --rc genhtml_branch_coverage=1 00:06:18.072 --rc genhtml_function_coverage=1 00:06:18.072 --rc genhtml_legend=1 00:06:18.072 --rc geninfo_all_blocks=1 00:06:18.072 --rc geninfo_unexecuted_blocks=1 00:06:18.072 00:06:18.072 ' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.072 --rc genhtml_branch_coverage=1 00:06:18.072 --rc genhtml_function_coverage=1 00:06:18.072 --rc genhtml_legend=1 00:06:18.072 --rc geninfo_all_blocks=1 00:06:18.072 --rc geninfo_unexecuted_blocks=1 00:06:18.072 00:06:18.072 ' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.072 --rc genhtml_branch_coverage=1 00:06:18.072 --rc genhtml_function_coverage=1 00:06:18.072 --rc genhtml_legend=1 00:06:18.072 --rc geninfo_all_blocks=1 00:06:18.072 --rc geninfo_unexecuted_blocks=1 00:06:18.072 00:06:18.072 ' 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:18.072 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:18.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
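The '[: : integer expression expected' complaint captured above comes from nvmf/common.sh line 33, where the traced test '[' '' -eq 1 ']' applies a numeric comparison to an empty string because the backing variable is unset in this run. Execution continues straight on to common.sh@37, so the message is harness noise rather than a failure. A minimal bash sketch of the failure mode and one defensive alternative; the variable name 'flag' is illustrative only, not taken from the script:

    flag=''                              # unset/empty, as in this run
    [ "$flag" -eq 1 ] && echo on         # prints: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on    # defaulting to 0 makes the test quietly false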
00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:18.073 09:23:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.331 09:24:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.331 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:26.332 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:26.332 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.332 09:24:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:26.332 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:26.332 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.332 09:24:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:26.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:06:26.332 00:06:26.332 --- 10.0.0.2 ping statistics --- 00:06:26.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.332 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:26.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:06:26.332 00:06:26.332 --- 10.0.0.1 ping statistics --- 00:06:26.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.332 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2557385 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2557385 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2557385 ']' 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.332 09:24:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.332 [2024-12-09 09:24:00.910198] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:06:26.332 [2024-12-09 09:24:00.910264] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.332 [2024-12-09 09:24:01.013433] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.332 [2024-12-09 09:24:01.043255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.332 [2024-12-09 09:24:01.043313] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.332 [2024-12-09 09:24:01.043323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.332 [2024-12-09 09:24:01.043330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.333 [2024-12-09 09:24:01.043336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:26.333 [2024-12-09 09:24:01.045080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.333 [2024-12-09 09:24:01.045243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.333 [2024-12-09 09:24:01.045245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.333 [2024-12-09 09:24:01.770327] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.333 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 Malloc0 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 Delay0 
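Condensed, the rpc_cmd sequence traced above assembles the abort target's backing stack: a TCP transport, a 64 MiB malloc bdev with 4096-byte blocks (the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values from target/abort.sh), and a Delay0 bdev wrapping Malloc0, whose -r/-t/-w/-n arguments set average and p99 read/write latencies in microseconds. Assuming rpc_cmd is the harness's usual wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock, the equivalent standalone calls would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000

The deliberately slow Delay0 namespace is what keeps I/O queued long enough for the abort example to have something to cancel, which the abort statistics further down bear out.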
00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 [2024-12-09 09:24:01.847851] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.593 09:24:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:26.593 [2024-12-09 09:24:01.997282] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:29.136 Initializing NVMe Controllers 00:06:29.136 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:29.136 controller IO queue size 128 less than required 00:06:29.136 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:29.136 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:29.136 Initialization complete. Launching workers. 
00:06:29.136 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28654 00:06:29.136 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28715, failed to submit 62 00:06:29.136 success 28658, unsuccessful 57, failed 0 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.136 rmmod nvme_tcp 00:06:29.136 rmmod nvme_fabrics 00:06:29.136 rmmod nvme_keyring 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2557385 ']' 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2557385 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2557385 ']' 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2557385 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2557385 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:29.136 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2557385' 00:06:29.136 killing process with pid 2557385 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2557385 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2557385 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:29.137 09:24:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.137 09:24:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.060 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.060 00:06:31.060 real 0m13.407s 00:06:31.060 user 0m14.205s 00:06:31.060 sys 0m6.562s 00:06:31.060 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.060 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:31.060 ************************************ 00:06:31.060 END TEST nvmf_abort 00:06:31.060 ************************************ 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.321 ************************************ 00:06:31.321 START TEST nvmf_ns_hotplug_stress 00:06:31.321 ************************************ 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.321 * Looking for test storage... 
00:06:31.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:31.321 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.584 --rc genhtml_branch_coverage=1 00:06:31.584 --rc genhtml_function_coverage=1 00:06:31.584 --rc genhtml_legend=1 00:06:31.584 --rc geninfo_all_blocks=1 00:06:31.584 --rc geninfo_unexecuted_blocks=1 00:06:31.584 00:06:31.584 ' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.584 --rc genhtml_branch_coverage=1 00:06:31.584 --rc genhtml_function_coverage=1 00:06:31.584 --rc genhtml_legend=1 00:06:31.584 --rc geninfo_all_blocks=1 00:06:31.584 --rc geninfo_unexecuted_blocks=1 00:06:31.584 00:06:31.584 ' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.584 --rc genhtml_branch_coverage=1 00:06:31.584 --rc genhtml_function_coverage=1 00:06:31.584 --rc genhtml_legend=1 00:06:31.584 --rc geninfo_all_blocks=1 00:06:31.584 --rc geninfo_unexecuted_blocks=1 00:06:31.584 00:06:31.584 ' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.584 --rc genhtml_branch_coverage=1 00:06:31.584 --rc genhtml_function_coverage=1 00:06:31.584 --rc genhtml_legend=1 00:06:31.584 --rc geninfo_all_blocks=1 00:06:31.584 --rc geninfo_unexecuted_blocks=1 00:06:31.584 00:06:31.584 ' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.584 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.585 09:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:39.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.724 
09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:39.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:39.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:39.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:39.724 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:39.724 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:39.724 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.560 ms 00:06:39.724 00:06:39.724 --- 10.0.0.2 ping statistics --- 00:06:39.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.725 rtt min/avg/max/mdev = 0.560/0.560/0.560/0.000 ms 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:39.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:06:39.725 00:06:39.725 --- 10.0.0.1 ping statistics --- 00:06:39.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.725 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2562945 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2562945 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2562945 ']' 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.725 09:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.725 [2024-12-09 09:24:14.499861] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:06:39.725 [2024-12-09 09:24:14.499914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.725 [2024-12-09 09:24:14.595096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.725 [2024-12-09 09:24:14.619944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.725 [2024-12-09 09:24:14.619992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.725 [2024-12-09 09:24:14.620000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.725 [2024-12-09 09:24:14.620007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.725 [2024-12-09 09:24:14.620014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
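Everything from nvmf_tcp_init through the waitforlisten message above is the standard two-port rig for NVMe/TCP on physical hardware: the target port is moved into a private network namespace so initiator-to-target traffic must actually traverse the NIC rather than short-circuiting through loopback. Condensed from the trace (interface names, addresses, and nvmf_tgt flags are the ones this run used; other rigs will differ):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target port lives in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                         # root ns -> target port
    ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> initiator port
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The sub-millisecond ping times (0.560 ms and 0.312 ms) prove the data path before any NVMe-oF traffic is attempted; nvmfappstart then blocks on /var/tmp/spdk.sock until the target's RPC server answers, and the reactor records that follow show cores 1-3 of mask 0xE coming online.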
00:06:39.725 [2024-12-09 09:24:14.621890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.725 [2024-12-09 09:24:14.622058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.725 [2024-12-09 09:24:14.622059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:39.984 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:40.244 [2024-12-09 09:24:15.496066] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.244 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:40.504 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:40.504 [2024-12-09 09:24:15.849534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:40.504 09:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:40.764 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:40.764 Malloc0 00:06:41.023 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:41.023 Delay0 00:06:41.023 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.283 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:41.283 NULL1 00:06:41.544 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:41.544 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2563320 00:06:41.544 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:41.544 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:41.544 09:24:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.806 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.082 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:42.082 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:42.082 true 00:06:42.082 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:42.082 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.343 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.603 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:42.603 09:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:42.603 true 00:06:42.603 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:42.603 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.863 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.124 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:43.124 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:43.124 true 00:06:43.384 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:43.384 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
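From this point to the perf summary at the end of the run, the trace is the main loop of ns_hotplug_stress.sh: while spdk_nvme_perf (PID 2563320) drives 30 seconds of 512-byte random reads at queue depth 128 against the subsystem, the script repeatedly hot-removes namespace 1, re-adds Delay0, and resizes NULL1 one step larger, so each null_size=NNNN record below marks one remove/add cycle survived under load. A condensed sketch reconstructed from the sh@27..sh@50 trace lines (rpc.py abbreviates the full scripts/rpc.py path, and the real script's ordering may differ in detail):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 512 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
                   -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # loop until perf's 30 s expire
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        rpc.py bdev_null_resize NULL1 $((++null_size))   # prints "true" on success
    done

Delay0 is the interesting namespace: it wraps Malloc0 with one-second injected latencies (the four bdev_delay_create values are in microseconds), so each hot-remove races against commands still in flight, which is exactly the path this test is stressing.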
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.384 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.645 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:43.645 09:24:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:43.906 true 00:06:43.906 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:43.906 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.906 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.167 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:44.167 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:44.429 true 00:06:44.429 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:44.429 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.429 09:24:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.690 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:44.690 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:44.951 true 00:06:44.951 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:44.951 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.213 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.213 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:45.213 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:45.474 true 00:06:45.474 09:24:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:45.474 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.733 09:24:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.733 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:45.733 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:45.993 true 00:06:45.993 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:45.993 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.255 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.515 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:46.515 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:46.515 true 00:06:46.515 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:46.515 09:24:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.775 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.036 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:47.036 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:47.036 true 00:06:47.036 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:47.036 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.296 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.556 09:24:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:47.556 09:24:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:47.556 true 00:06:47.815 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:47.815 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.815 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.074 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:48.074 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:48.333 true 00:06:48.333 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:48.333 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.333 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.593 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:48.593 09:24:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:48.853 true 00:06:48.853 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:48.853 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.114 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.114 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:49.114 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:49.373 true 00:06:49.373 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:49.373 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.633 09:24:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.633 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:49.633 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:49.893 true 00:06:49.893 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:49.893 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.152 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.152 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:50.152 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:50.413 true 00:06:50.413 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:50.413 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.673 09:24:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.933 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:50.933 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:50.933 true 00:06:50.933 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:50.934 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.194 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.455 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:51.455 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:51.455 true 00:06:51.455 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:51.455 09:24:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.716 09:24:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.976 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:51.976 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:52.236 true 00:06:52.236 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:52.236 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.236 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.496 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:52.497 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:52.757 true 00:06:52.757 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:52.757 09:24:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.757 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.018 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:53.018 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:53.279 true 00:06:53.279 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:53.279 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.540 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.540 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:53.540 09:24:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:53.801 true 00:06:53.801 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:53.801 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.062 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.062 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:54.062 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:54.323 true 00:06:54.323 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:54.323 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.584 09:24:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.846 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:54.846 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:54.846 true 00:06:54.846 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:54.846 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.106 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.367 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:55.367 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:55.367 true 00:06:55.367 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:55.367 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.629 09:24:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.891 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:55.891 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:55.891 true 00:06:55.891 09:24:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:55.891 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.152 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.413 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:56.413 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:56.413 true 00:06:56.674 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:56.674 09:24:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.674 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.935 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:56.935 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:57.196 true 00:06:57.196 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:57.196 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.196 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.457 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:57.457 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:57.720 true 00:06:57.720 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:57.720 09:24:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.980 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.980 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:57.980 09:24:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:58.241 true 00:06:58.241 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:58.241 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.501 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.501 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:58.501 09:24:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:58.761 true 00:06:58.761 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:58.761 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.022 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.283 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:59.283 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:59.283 true 00:06:59.283 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:59.283 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.543 09:24:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.802 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:59.802 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:59.802 true 00:06:59.803 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:06:59.803 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.063 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.322 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:00.322 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:00.582 true 00:07:00.582 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:00.582 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.582 09:24:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.843 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:00.843 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:01.103 true 00:07:01.103 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:01.103 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.103 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.362 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:01.363 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:01.622 true 00:07:01.622 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:01.622 09:24:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.882 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.882 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:01.882 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:02.142 true 00:07:02.142 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:02.142 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.404 09:24:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.404 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:02.404 09:24:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:02.666 true 00:07:02.666 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:02.666 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.927 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.187 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:03.187 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:03.187 true 00:07:03.187 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:03.187 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.448 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.708 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:03.708 09:24:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:03.708 true 00:07:03.708 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:03.708 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.969 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.250 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:04.250 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:04.250 true 00:07:04.250 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:04.250 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.511 09:24:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.771 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:04.771 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:05.031 true 00:07:05.031 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:05.031 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.031 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.291 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:05.291 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:05.551 true 00:07:05.551 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:05.551 09:24:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.812 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.812 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:05.812 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:06.073 true 00:07:06.073 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:06.073 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.333 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.333 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:06.333 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:06.593 true 00:07:06.593 09:24:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:06.593 09:24:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.853 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.114 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:07.114 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:07.114 true 00:07:07.114 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:07.114 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.375 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.635 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:07.635 09:24:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:07.635 true 00:07:07.635 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:07.635 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.895 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.155 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:08.155 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:08.155 true 00:07:08.155 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:08.155 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.424 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.703 09:24:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:08.703 09:24:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:08.703 true 00:07:09.016 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:09.016 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.016 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.347 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:09.347 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:09.347 true 00:07:09.347 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:09.347 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.615 09:24:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.875 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:09.875 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:09.875 true 00:07:09.875 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:09.875 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.134 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.394 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:07:10.394 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:07:10.394 true 00:07:10.394 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:10.394 09:24:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.653 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.913 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:07:10.913 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:07:10.913 true 00:07:11.174 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:11.174 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.174 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.433 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:07:11.433 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:07:11.694 true 00:07:11.694 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320 00:07:11.694 09:24:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.694 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.954 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:07:11.954 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:07:11.954 Initializing NVMe Controllers 00:07:11.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.954 Controller IO queue size 128, less than required. 00:07:11.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:11.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:11.954 Initialization complete. Launching workers. 
00:07:11.954 ========================================================
00:07:11.954 Latency(us)
00:07:11.954 Device Information : IOPS MiB/s Average min max
00:07:11.954 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30504.00 14.89 4196.12 1396.47 10597.40
00:07:11.954 ========================================================
00:07:11.954 Total : 30504.00 14.89 4196.12 1396.47 10597.40
00:07:12.214 true
00:07:12.214 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2563320
00:07:12.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2563320) - No such process
00:07:12.215 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2563320
00:07:12.215 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:12.215 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:12.476 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:12.476 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:12.476 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:12.476 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.476 09:24:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:12.736 null0
00:07:12.736 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.736 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.736 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:12.736 null1
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:12.996 null2
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:12.996 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:13.256 null3
00:07:13.256 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i
)) 00:07:13.256 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.256 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:13.256 null4 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:13.518 null5 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.518 09:24:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:13.778 null6 00:07:13.778 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:13.778 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:13.778 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:14.040 null7 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
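
The first phase of this trace, which ends at the perf latency summary above, repeats one fixed cadence per iteration: ns_hotplug_stress.sh confirms the background perf workload is still alive (kill -0 2563320), hot-removes namespace 1, re-attaches the Delay0 bdev, then grows the NULL1 null bdev by one unit (null_size=1046, 1047, ... in the trace). When kill -0 finally fails ("No such process" above), the script waits on the PID and detaches the namespaces. A minimal sketch of that loop as the -- # trace lines imply it; PERF_PID and the starting size are placeholders, not values read from the script source:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
null_size=1045   # assumed starting point; the trace picks up at 1046

while kill -0 "$PERF_PID" 2>/dev/null; do         # script line 44; PERF_PID is a placeholder
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1      # line 45: hot-remove namespace 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0    # line 46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                  # line 49
    "$rpc" bdev_null_resize NULL1 "$null_size"    # line 50: resize races the hotplug churn
done
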
00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
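
The stress phase now under way multiplexes eight add_remove workers over the one subsystem. The trace exposes the worker's shape directly (local nsid=2 bdev=null1, a (( i < 10 )) loop, paired add/remove RPCs at script lines 17 and 18), so a sketch is straightforward; $rpc and $nqn are as in the sketch above:

# add_remove <nsid> <bdev>: cycle one namespace ten times, per the @16-@18 trace lines.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # line 17: attach
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # line 18: detach
    done
}
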
00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:14.040 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
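
Launching those workers is the nthreads=8 / pids=() machinery traced at script lines 58-64: one 100 MiB null bdev with a 4096-byte block size per worker, one backgrounded add_remove per bdev, and the PIDs collected for the wait that appears next in the trace. A compact sketch of that orchestration, assuming the add_remove function sketched above:

nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    "$rpc" bdev_null_create "null$i" 100 4096    # line 60: null0 .. null7
done

for ((i = 0; i < nthreads; i++)); do
    add_remove "$((i + 1))" "null$i" &           # line 63: namespace IDs are 1-based
    pids+=($!)                                   # line 64: remember each worker PID
done

wait "${pids[@]}"                                # the trace's "wait 2569881 2569882 ..."
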
00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2569881 2569882 2569885 2569886 2569888 2569890 2569892 2569894 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.041 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.303 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.303 09:24:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.574 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.575 09:24:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.836 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.098 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.099 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.360 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.361 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.623 09:24:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.623 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.623 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.623 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.885 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
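
Everything from the wait onward is those eight workers racing: their RPCs are serialized only by the SPDK RPC server, which is why the @17 (add) and @18 (remove) trace lines interleave nondeterministically. With nthreads=8 and ten rounds per worker, roughly eighty of each call should appear in this phase. A rough self-check against a saved copy of this console output; build.log is an assumed filename:

grep -c 'ns_hotplug_stress.sh@17' build.log   # worker nvmf_subsystem_add_ns calls
grep -c 'ns_hotplug_stress.sh@18' build.log   # worker nvmf_subsystem_remove_ns calls
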
00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.147 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.408 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:16.669 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:16.670 09:24:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.670 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.670 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.670 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:16.670 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.932 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.194 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:17.456 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.718 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:17.718 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:17.718 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:17.718 rmmod nvme_tcp 00:07:17.718 rmmod nvme_fabrics 00:07:17.980 rmmod nvme_keyring 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2562945 ']' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2562945 ']' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562945' 00:07:17.980 killing process with pid 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2562945 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:17.980 09:24:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:20.535 00:07:20.535 real 0m48.880s 00:07:20.535 user 3m18.661s 00:07:20.535 sys 0m17.397s 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:20.535 ************************************ 00:07:20.535 END TEST nvmf_ns_hotplug_stress 00:07:20.535 ************************************ 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.535 ************************************ 00:07:20.535 START TEST nvmf_delete_subsystem 00:07:20.535 ************************************ 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:20.535 * Looking for test storage... 
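Before the delete_subsystem trace continues, the wall of add_ns/remove_ns records above is easier to follow once the generating loop is written out. A minimal sketch, reconstructed purely from the xtrace of target/ns_hotplug_stress.sh lines 16-18 (the helper name add_remove, the loop bound of 10, and the nsid-to-nullN pairing are read off the trace, so treat this as an approximation of the script, not the script itself):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {  # hypothetical name for the helper traced above
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; ++i)); do                                 # @16 in the trace
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18
        done
    }

    # One loop per null bdev, all in the background: namespace IDs 1..8
    # map to null0..null7, and this parallelism is what produces the
    # interleaved add/remove ordering seen in the records above.
    for nsid in {1..8}; do
        add_remove "$nsid" "null$((nsid - 1))" &
    done
    wait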
00:07:20.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.535 --rc genhtml_branch_coverage=1 00:07:20.535 --rc genhtml_function_coverage=1 00:07:20.535 --rc genhtml_legend=1 00:07:20.535 --rc geninfo_all_blocks=1 00:07:20.535 --rc geninfo_unexecuted_blocks=1 00:07:20.535 00:07:20.535 ' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.535 --rc genhtml_branch_coverage=1 00:07:20.535 --rc genhtml_function_coverage=1 00:07:20.535 --rc genhtml_legend=1 00:07:20.535 --rc geninfo_all_blocks=1 00:07:20.535 --rc geninfo_unexecuted_blocks=1 00:07:20.535 00:07:20.535 ' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.535 --rc genhtml_branch_coverage=1 00:07:20.535 --rc genhtml_function_coverage=1 00:07:20.535 --rc genhtml_legend=1 00:07:20.535 --rc geninfo_all_blocks=1 00:07:20.535 --rc geninfo_unexecuted_blocks=1 00:07:20.535 00:07:20.535 ' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.535 --rc genhtml_branch_coverage=1 00:07:20.535 --rc genhtml_function_coverage=1 00:07:20.535 --rc genhtml_legend=1 00:07:20.535 --rc geninfo_all_blocks=1 00:07:20.535 --rc geninfo_unexecuted_blocks=1 00:07:20.535 00:07:20.535 ' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.535 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.536 09:24:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:28.676 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:28.677 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.677 
09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:28.677 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:28.677 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:28.677 Found net devices under 0000:4b:00.1: cvl_0_1 
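The two "Found net devices under ..." records above are the tail of gather_supported_nvmf_pci_devs: the trace at nvmf/common.sh@410-@428 walks each supported PCI function and reads its interface names straight out of sysfs. Condensed into a standalone sketch (the two e810 addresses are the ones detected in this run):

    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # Each glob match is a path like /sys/bus/pci/devices/<pci>/net/cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        # Strip everything up to the last slash, leaving just the interface name
        pci_net_devs=("${pci_net_devs[@]##*/}")
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

With two interfaces found, nvmf_tcp_init (next in the trace) can move cvl_0_0 into its own network namespace as the target side at 10.0.0.2 and keep cvl_0_1 on the host as the initiator side at 10.0.0.1, which is what the ip netns / ip addr / ping records that follow are doing.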
00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:28.677 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:28.678 09:25:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:28.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:28.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.561 ms 00:07:28.678 00:07:28.678 --- 10.0.0.2 ping statistics --- 00:07:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.678 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:28.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:28.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:07:28.678 00:07:28.678 --- 10.0.0.1 ping statistics --- 00:07:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:28.678 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2575159 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2575159 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2575159 ']' 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.678 09:25:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.678 09:25:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 [2024-12-09 09:25:03.257619] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:28.678 [2024-12-09 09:25:03.257696] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.678 [2024-12-09 09:25:03.355256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:28.678 [2024-12-09 09:25:03.382496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:28.678 [2024-12-09 09:25:03.382546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:28.678 [2024-12-09 09:25:03.382555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:28.678 [2024-12-09 09:25:03.382562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:28.678 [2024-12-09 09:25:03.382568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:28.678 [2024-12-09 09:25:03.384154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.678 [2024-12-09 09:25:03.384159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.678 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.678 [2024-12-09 09:25:04.127302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.939 09:25:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.939 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.940 [2024-12-09 09:25:04.143567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.940 NULL1 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.940 Delay0 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2575413 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:28.940 09:25:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:28.940 [2024-12-09 09:25:04.248475] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
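The setup traced above (delete_subsystem.sh@13-@30) reduces to a short RPC sequence; collected here as a sketch with $rpc_py and $perf standing in for the full workspace paths, every argument taken verbatim from the trace:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    nqn=nqn.2016-06.io.spdk:cnode1

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # up to 10 namespaces
    "$rpc_py" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512    # 1000 MB backing bdev, 512 B blocks
    "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # read/write avg and p99 latencies, usec (1 s)
    "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0

    # Start 5 s of queue-depth-128 random I/O against the target, give it
    # 2 s to fill the queues, then (delete_subsystem.sh@32, below) the
    # subsystem is deleted out from under it.
    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    sleep 2
    "$rpc_py" nvmf_delete_subsystem "$nqn"

The one-second Delay0 latencies are the point of the test: they guarantee a full queue of in-flight commands at the moment nvmf_delete_subsystem runs, which is what the error completions below are reporting.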
00:07:30.852 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:30.852 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:30.852 09:25:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[... dozens of "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" records at 00:07:30.852, interleaved with "starting I/O failed: -6" ...]
00:07:30.852 [2024-12-09 09:25:06.291892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3330 is same with the state(6) to be set
[... the aborted Read/Write completions and "starting I/O failed: -6" records continue through 00:07:30.853 ...]
00:07:32.234 [2024-12-09 09:25:07.265573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a1190 is same with the state(6) to be set
[... ~20 more aborted Read/Write completions at 00:07:32.234 ...]
00:07:32.234 [2024-12-09 09:25:07.295811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3150 is same with the state(6) to be set
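Every completion above reports sct=0, sc=8: status code type 0 (generic command status) with status code 8, which the NVMe specification defines as Command Aborted due to SQ Deletion. That is the expected fate of queued I/O when nvmf_delete_subsystem tears down the qpairs mid-run. When reviewing a run like this, tallying the stream is easier than reading it raw; a throwaway sketch ("console.log" is a placeholder for a saved copy of this output):

  # Count aborted Read vs Write completions, grouped by status.
  grep -oE '(Read|Write) completed with error \(sct=[0-9]+, sc=[0-9]+\)' console.log | sort | uniq -c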
[... ~20 more aborted Read/Write completions at 00:07:32.234 ...]
00:07:32.234 [2024-12-09 09:25:07.295894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3510 is same with the state(6) to be set
[... ~40 more aborted Read/Write completions at 00:07:32.234-00:07:32.235 ...]
00:07:32.235 [2024-12-09 09:25:07.299682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd55000d7c0 is same with the state(6) to be set
[... ~40 more aborted Read/Write completions at 00:07:32.235 ...]
00:07:32.235 [2024-12-09 09:25:07.299833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd55000d020 is same with the state(6) to be set
00:07:32.235 Initializing NVMe Controllers
00:07:32.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:32.235 Controller IO queue size 128, less than required.
00:07:32.235 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:32.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:32.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:32.235 Initialization complete. Launching workers.
00:07:32.235 ======================================================== 00:07:32.235 Latency(us) 00:07:32.235 Device Information : IOPS MiB/s Average min max 00:07:32.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.40 0.08 908525.22 217.32 1006091.89 00:07:32.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.33 0.09 938242.73 408.12 2002511.01 00:07:32.235 ======================================================== 00:07:32.235 Total : 344.73 0.17 924156.97 217.32 2002511.01 00:07:32.235 00:07:32.235 [2024-12-09 09:25:07.300353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a1190 (9): Bad file descriptor 00:07:32.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:32.235 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.235 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:32.235 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2575413 00:07:32.235 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2575413 00:07:32.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2575413) - No such process 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2575413 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2575413 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2575413 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.494 09:25:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.494 [2024-12-09 09:25:07.830244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2576096 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:32.494 09:25:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:32.494 [2024-12-09 09:25:07.908742] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
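The trace that follows steps through the wait loop in delete_subsystem.sh (the @57/@58/@60 suffixes are its line numbers): kill -0 probes whether the perf process still exists without sending a signal, and the loop sleeps 0.5 s between probes, giving up after roughly 10 s (20 iterations). A rough reconstruction of the pattern, with the script's actual failure handling elided and $perf_pid assumed to hold the spdk_nvme_perf PID:

  delay=0
  while kill -0 "$perf_pid" 2> /dev/null; do   # line 57: is perf still running?
      sleep 0.5                                # line 58
      ((delay++ > 20)) && exit 1               # line 60: bail out after ~10 s
  done

In this run the loop exits the normal way: perf was started with -t 3 and terminates on its own, after which kill -0 fails with "No such process" (visible further down).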
00:07:33.063 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.063 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:33.063 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:33.633 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:33.633 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:33.633 09:25:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.203 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.203 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:34.203 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:34.464 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:34.464 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:34.464 09:25:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.034 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.034 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:35.034 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.605 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:35.605 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:35.605 09:25:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:35.865 Initializing NVMe Controllers 00:07:35.865 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:35.865 Controller IO queue size 128, less than required. 00:07:35.865 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:35.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:35.865 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:35.865 Initialization complete. Launching workers. 
00:07:35.865 ======================================================== 00:07:35.865 Latency(us) 00:07:35.865 Device Information : IOPS MiB/s Average min max 00:07:35.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001993.71 1000115.40 1043097.99 00:07:35.865 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003113.85 1000222.34 1041256.43 00:07:35.865 ======================================================== 00:07:35.865 Total : 256.00 0.12 1002553.78 1000115.40 1043097.99 00:07:35.865 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2576096 00:07:36.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2576096) - No such process 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2576096 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:36.126 rmmod nvme_tcp 00:07:36.126 rmmod nvme_fabrics 00:07:36.126 rmmod nvme_keyring 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2575159 ']' 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2575159 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2575159 ']' 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2575159 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575159 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2575159' 00:07:36.126 killing process with pid 2575159 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2575159 00:07:36.126 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2575159 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:36.388 09:25:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:38.300 00:07:38.300 real 0m18.157s 00:07:38.300 user 0m30.616s 00:07:38.300 sys 0m6.649s 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:38.300 ************************************ 00:07:38.300 END TEST nvmf_delete_subsystem 00:07:38.300 ************************************ 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.300 09:25:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.562 ************************************ 00:07:38.562 START TEST nvmf_host_management 00:07:38.562 ************************************ 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:38.562 * Looking for test storage... 
00:07:38.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.562 --rc genhtml_branch_coverage=1 00:07:38.562 --rc genhtml_function_coverage=1 00:07:38.562 --rc genhtml_legend=1 00:07:38.562 --rc geninfo_all_blocks=1 00:07:38.562 --rc geninfo_unexecuted_blocks=1 00:07:38.562 00:07:38.562 ' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.562 --rc genhtml_branch_coverage=1 00:07:38.562 --rc genhtml_function_coverage=1 00:07:38.562 --rc genhtml_legend=1 00:07:38.562 --rc geninfo_all_blocks=1 00:07:38.562 --rc geninfo_unexecuted_blocks=1 00:07:38.562 00:07:38.562 ' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.562 --rc genhtml_branch_coverage=1 00:07:38.562 --rc genhtml_function_coverage=1 00:07:38.562 --rc genhtml_legend=1 00:07:38.562 --rc geninfo_all_blocks=1 00:07:38.562 --rc geninfo_unexecuted_blocks=1 00:07:38.562 00:07:38.562 ' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.562 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.562 --rc genhtml_branch_coverage=1 00:07:38.562 --rc genhtml_function_coverage=1 00:07:38.562 --rc genhtml_legend=1 00:07:38.562 --rc geninfo_all_blocks=1 00:07:38.562 --rc geninfo_unexecuted_blocks=1 00:07:38.562 00:07:38.562 ' 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.562 09:25:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.562 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.563 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:38.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.824 09:25:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:46.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:46.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:46.967 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.967 09:25:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:46.967 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.967 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:46.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:46.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.690 ms 00:07:46.968 00:07:46.968 --- 10.0.0.2 ping statistics --- 00:07:46.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.968 rtt min/avg/max/mdev = 0.690/0.690/0.690/0.000 ms 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:07:46.968 00:07:46.968 --- 10.0.0.1 ping statistics --- 00:07:46.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.968 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2581113 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2581113 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:46.968 09:25:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2581113 ']' 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.968 09:25:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 [2024-12-09 09:25:21.504180] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:46.968 [2024-12-09 09:25:21.504243] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:46.968 [2024-12-09 09:25:21.605678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.968 [2024-12-09 09:25:21.634497] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:46.968 [2024-12-09 09:25:21.634555] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:46.968 [2024-12-09 09:25:21.634564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.968 [2024-12-09 09:25:21.634571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.968 [2024-12-09 09:25:21.634577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
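The setup traced above boils down to a short recipe: move the target-side E810 port into a private network namespace, address both ends of the link, open TCP port 4420 through the firewall, verify reachability in both directions, then launch nvmf_tgt inside the namespace. A minimal standalone sketch, with interface names, addresses, and flags taken from the log (the wrapper script itself is illustrative, not SPDK's nvmf/common.sh verbatim):

# Sketch only: mirrors the ip/iptables/ping sequence traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator-side port stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                          # root ns -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1      # target namespace -> root ns
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &   # flags as in the log

Splitting the two ports of one NIC across namespaces forces traffic onto the wire instead of the kernel loopback path, which is the point of a phy (physical) autotest run.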
00:07:46.968 [2024-12-09 09:25:21.636874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.968 [2024-12-09 09:25:21.637039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.968 [2024-12-09 09:25:21.637205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.968 [2024-12-09 09:25:21.637205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 [2024-12-09 09:25:22.366331] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystems 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.968 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:46.968 Malloc0 00:07:47.230 [2024-12-09 09:25:22.437979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management --
target/host_management.sh@73 -- # perfpid=2581356 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2581356 /var/tmp/bdevperf.sock 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2581356 ']' 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:47.230 { 00:07:47.230 "params": { 00:07:47.230 "name": "Nvme$subsystem", 00:07:47.230 "trtype": "$TEST_TRANSPORT", 00:07:47.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:47.230 "adrfam": "ipv4", 00:07:47.230 "trsvcid": "$NVMF_PORT", 00:07:47.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:47.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:47.230 "hdgst": ${hdgst:-false}, 00:07:47.230 "ddgst": ${ddgst:-false} 00:07:47.230 }, 00:07:47.230 "method": "bdev_nvme_attach_controller" 00:07:47.230 } 00:07:47.230 EOF 00:07:47.230 )") 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:47.230 09:25:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:47.230 "params": { 00:07:47.230 "name": "Nvme0", 00:07:47.230 "trtype": "tcp", 00:07:47.230 "traddr": "10.0.0.2", 00:07:47.230 "adrfam": "ipv4", 00:07:47.230 "trsvcid": "4420", 00:07:47.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:47.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:47.230 "hdgst": false, 00:07:47.230 "ddgst": false 00:07:47.230 }, 00:07:47.230 "method": "bdev_nvme_attach_controller" 00:07:47.230 }' 00:07:47.230 [2024-12-09 09:25:22.543035] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:07:47.230 [2024-12-09 09:25:22.543088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581356 ] 00:07:47.230 [2024-12-09 09:25:22.632480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.230 [2024-12-09 09:25:22.650654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.490 Running I/O for 10 seconds... 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=911 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 911 -ge 100 ']' 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:48.061 09:25:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.061 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.062 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.062 [2024-12-09 09:25:23.421763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.062 [2024-12-09 09:25:23.421806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.421817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.062 [2024-12-09 09:25:23.421825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.421834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.062 [2024-12-09 09:25:23.421841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.421849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:48.062 [2024-12-09 09:25:23.421857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.421865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd7790 is same with the state(6) to be set 00:07:48.062 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.062 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:48.062 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.062 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:48.062 [2024-12-09 09:25:23.433239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd7790 (9): Bad file descriptor 00:07:48.062 [2024-12-09 09:25:23.433316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.062 [2024-12-09 09:25:23.433327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.433342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.062 [2024-12-09 09:25:23.433350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.062 [2024-12-09 09:25:23.433361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [... WRITE commands cid:2 through cid:62 (sqid:1, nsid:1, lba:256-7936 in 128-block steps, len:128 each) printed and completed identically with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...] 00:07:48.063 [2024-12-09
09:25:23.434408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.063 [2024-12-09 09:25:23.434418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:48.063 [2024-12-09 09:25:23.434425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:48.063 [2024-12-09 09:25:23.435636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:48.063 task offset: 0 on job bdev=Nvme0n1 fails 00:07:48.063 00:07:48.063 Latency(us) 00:07:48.063 [2024-12-09T08:25:23.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.063 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:48.063 Job: Nvme0n1 ended in about 0.60 seconds with error 00:07:48.063 Verification LBA range: start 0x0 length 0x400 00:07:48.063 Nvme0n1 : 0.60 1712.71 107.04 107.04 0.00 34324.26 1542.83 31238.83 00:07:48.063 [2024-12-09T08:25:23.516Z] =================================================================================================================== 00:07:48.063 [2024-12-09T08:25:23.516Z] Total : 1712.71 107.04 107.04 0.00 34324.26 1542.83 31238.83 00:07:48.063 [2024-12-09 09:25:23.437621] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:48.063 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.063 09:25:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:48.063 [2024-12-09 09:25:23.484908] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
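The abort flood and controller reset in the middle of the run are deliberate: while bdevperf drives I/O, the test revokes the host's access to the subsystem and then restores it. Expressed as plain RPCs (rpc_cmd in the trace is a thin wrapper over scripts/rpc.py; the default RPC socket is assumed), the sequence is roughly:

# Revoke the initiator's access while I/O is in flight; the target tears down
# the connection and every queued WRITE completes as ABORTED - SQ DELETION.
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# Restore access; the host-side bdev layer then resets and reconnects the
# controller, producing the "Resetting controller successful" notice above.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0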
00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2581356 00:07:49.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2581356) - No such process 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.005 { 00:07:49.005 "params": { 00:07:49.005 "name": "Nvme$subsystem", 00:07:49.005 "trtype": "$TEST_TRANSPORT", 00:07:49.005 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.005 "adrfam": "ipv4", 00:07:49.005 "trsvcid": "$NVMF_PORT", 00:07:49.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.005 "hdgst": ${hdgst:-false}, 00:07:49.005 "ddgst": ${ddgst:-false} 00:07:49.005 }, 00:07:49.005 "method": "bdev_nvme_attach_controller" 00:07:49.005 } 00:07:49.005 EOF 00:07:49.005 )") 00:07:49.005 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:49.265 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:49.265 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:49.265 09:25:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.265 "params": { 00:07:49.265 "name": "Nvme0", 00:07:49.265 "trtype": "tcp", 00:07:49.265 "traddr": "10.0.0.2", 00:07:49.265 "adrfam": "ipv4", 00:07:49.265 "trsvcid": "4420", 00:07:49.265 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.265 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:49.265 "hdgst": false, 00:07:49.265 "ddgst": false 00:07:49.265 }, 00:07:49.265 "method": "bdev_nvme_attach_controller" 00:07:49.265 }' 00:07:49.265 [2024-12-09 09:25:24.497263] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:07:49.265 [2024-12-09 09:25:24.497319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581839 ] 00:07:49.265 [2024-12-09 09:25:24.586870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.265 [2024-12-09 09:25:24.603329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.525 Running I/O for 1 seconds... 
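The second bdevperf run is configured the same way as the first: gen_nvmf_target_json prints a bdev_nvme_attach_controller fragment and bdevperf reads the assembled document from an anonymous descriptor (--json /dev/fd/62). Reconstructed as a standalone invocation with the parameters printed above (the outer subsystems/config wrapper is filled in by hand here, following SPDK's usual JSON config shape):

# Sketch: feed the bdev config to bdevperf via process substitution,
# so no temporary config file is left on the test node.
./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)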
00:07:50.463 1536.00 IOPS, 96.00 MiB/s 00:07:50.463 Latency(us) 00:07:50.463 [2024-12-09T08:25:25.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.463 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:50.463 Verification LBA range: start 0x0 length 0x400 00:07:50.463 Nvme0n1 : 1.02 1575.04 98.44 0.00 0.00 39925.53 6690.13 32549.55 00:07:50.463 [2024-12-09T08:25:25.916Z] =================================================================================================================== 00:07:50.463 [2024-12-09T08:25:25.916Z] Total : 1575.04 98.44 0.00 0.00 39925.53 6690.13 32549.55 00:07:50.723 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:50.723 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:50.723 09:25:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.723 rmmod nvme_tcp 00:07:50.723 rmmod nvme_fabrics 00:07:50.723 rmmod nvme_keyring 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2581113 ']' 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2581113 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2581113 ']' 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2581113 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2581113 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:50.723 09:25:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2581113' 00:07:50.723 killing process with pid 2581113 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2581113 00:07:50.723 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2581113 00:07:50.983 [2024-12-09 09:25:26.229339] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.983 09:25:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.891 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:52.891 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:52.891 00:07:52.891 real 0m14.556s 00:07:52.891 user 0m23.164s 00:07:52.891 sys 0m6.650s 00:07:52.891 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.891 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:52.891 ************************************ 00:07:52.891 END TEST nvmf_host_management 00:07:52.891 ************************************ 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.152 ************************************ 00:07:53.152 START TEST nvmf_lvol 00:07:53.152 ************************************ 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:53.152 * Looking for test storage... 00:07:53.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:53.152 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.413 --rc genhtml_branch_coverage=1 00:07:53.413 --rc genhtml_function_coverage=1 00:07:53.413 --rc genhtml_legend=1 00:07:53.413 --rc geninfo_all_blocks=1 00:07:53.413 --rc geninfo_unexecuted_blocks=1 00:07:53.413 00:07:53.413 ' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.413 --rc genhtml_branch_coverage=1 00:07:53.413 --rc genhtml_function_coverage=1 00:07:53.413 --rc genhtml_legend=1 00:07:53.413 --rc geninfo_all_blocks=1 00:07:53.413 --rc geninfo_unexecuted_blocks=1 00:07:53.413 00:07:53.413 ' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.413 --rc genhtml_branch_coverage=1 00:07:53.413 --rc genhtml_function_coverage=1 00:07:53.413 --rc genhtml_legend=1 00:07:53.413 --rc geninfo_all_blocks=1 00:07:53.413 --rc geninfo_unexecuted_blocks=1 00:07:53.413 00:07:53.413 ' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.413 --rc genhtml_branch_coverage=1 00:07:53.413 --rc genhtml_function_coverage=1 00:07:53.413 --rc genhtml_legend=1 00:07:53.413 --rc geninfo_all_blocks=1 00:07:53.413 --rc geninfo_unexecuted_blocks=1 00:07:53.413 00:07:53.413 ' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
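The trace above is scripts/common.sh deciding whether the installed lcov predates version 2: "lt 1.15 2" calls "cmp_versions 1.15 '<' 2", which splits each version string on '.', '-' and ':' (IFS=.-:) and compares the components numerically from left to right, so 1.15 sorts below 2 and the legacy "--rc lcov_*" option spelling gets selected. A minimal bash sketch of that comparison, re-implemented here for illustration (assumes purely numeric components; this is not the SPDK helper itself):

    #!/usr/bin/env bash
    # Sketch of the per-component version compare traced above:
    # split on . - : and compare numerically, left to right.
    version_lt() {
        local IFS='.-:'
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* options"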
00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.413 09:25:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:01.556 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:01.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:01.556 09:25:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:01.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:01.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:01.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:01.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.668 ms 00:08:01.556 00:08:01.556 --- 10.0.0.2 ping statistics --- 00:08:01.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.556 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:01.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:01.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:08:01.556 00:08:01.556 --- 10.0.0.1 ping statistics --- 00:08:01.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:01.556 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:08:01.556 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:01.557 09:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2586230 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2586230 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2586230 ']' 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.557 [2024-12-09 09:25:36.109091] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:08:01.557 [2024-12-09 09:25:36.109160] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:01.557 [2024-12-09 09:25:36.207969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:01.557 [2024-12-09 09:25:36.236367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:01.557 [2024-12-09 09:25:36.236420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:01.557 [2024-12-09 09:25:36.236429] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:01.557 [2024-12-09 09:25:36.236436] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:01.557 [2024-12-09 09:25:36.236443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:01.557 [2024-12-09 09:25:36.238183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.557 [2024-12-09 09:25:36.238309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.557 [2024-12-09 09:25:36.238310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.557 09:25:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.817 [2024-12-09 09:25:37.116308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.817 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.076 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:02.077 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:02.337 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:02.337 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:02.337 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:02.598 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bf7d1121-37c3-4314-9205-1c291fbff8eb 00:08:02.598 09:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf7d1121-37c3-4314-9205-1c291fbff8eb lvol 20 00:08:02.857 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=36ecbc16-8e73-4382-b460-38666ce8f02a 00:08:02.857 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.857 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36ecbc16-8e73-4382-b460-38666ce8f02a 00:08:03.117 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:03.376 [2024-12-09 09:25:38.632629] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.377 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.636 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2586901 00:08:03.636 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:03.636 09:25:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.633 09:25:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 36ecbc16-8e73-4382-b460-38666ce8f02a MY_SNAPSHOT 00:08:04.944 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=79b9ef31-5f5e-416c-a67a-2397a2cf1229 00:08:04.944 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 36ecbc16-8e73-4382-b460-38666ce8f02a 30 00:08:04.944 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 79b9ef31-5f5e-416c-a67a-2397a2cf1229 MY_CLONE 00:08:05.204 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cb389473-a3df-4a47-be72-2a54037ab3d5 00:08:05.204 09:25:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cb389473-a3df-4a47-be72-2a54037ab3d5 00:08:05.772 09:25:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2586901 00:08:13.906 Initializing NVMe Controllers 00:08:13.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:13.906 Controller IO queue size 128, less than required. 00:08:13.906 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
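While spdk_nvme_perf drives 4 KiB random writes at queue depth 128 for 10 seconds against the exported lvol, the script walks the snapshot/clone lifecycle over JSON-RPC: snapshot the live volume, resize it from its initial size of 20 up to 30 (the script's LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE), clone the snapshot, then inflate the clone so it no longer depends on the snapshot. A condensed replay of the rpc.py calls visible in the trace above (the UUIDs are the ones captured there):

    # Condensed replay of the lvol lifecycle steps traced above.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    lvol=36ecbc16-8e73-4382-b460-38666ce8f02a

    snapshot=$($rpc_py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # point-in-time copy; output captured as $snapshot
    $rpc_py bdev_lvol_resize "$lvol" 30                         # grow the live lvol to its final size
    clone=$($rpc_py bdev_lvol_clone "$snapshot" MY_CLONE)       # thin clone backed by the snapshot
    $rpc_py bdev_lvol_inflate "$clone"                          # allocate clusters, drop the snapshot dependency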
00:08:13.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:13.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:13.906 Initialization complete. Launching workers. 00:08:13.906 ======================================================== 00:08:13.906 Latency(us) 00:08:13.906 Device Information : IOPS MiB/s Average min max 00:08:13.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16388.40 64.02 7811.77 1506.29 58595.67 00:08:13.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17080.30 66.72 7494.11 360.12 50840.29 00:08:13.906 ======================================================== 00:08:13.906 Total : 33468.70 130.74 7649.66 360.12 58595.67 00:08:13.906 00:08:13.906 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:13.906 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36ecbc16-8e73-4382-b460-38666ce8f02a 00:08:14.166 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf7d1121-37c3-4314-9205-1c291fbff8eb 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:14.426 rmmod nvme_tcp 00:08:14.426 rmmod nvme_fabrics 00:08:14.426 rmmod nvme_keyring 00:08:14.426 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2586230 ']' 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2586230 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2586230 ']' 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2586230 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2586230 00:08:14.427 09:25:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2586230' 00:08:14.427 killing process with pid 2586230 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2586230 00:08:14.427 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2586230 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.687 09:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.596 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:16.596 00:08:16.596 real 0m23.627s 00:08:16.596 user 1m4.233s 00:08:16.596 sys 0m8.352s 00:08:16.596 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.596 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:16.596 ************************************ 00:08:16.596 END TEST nvmf_lvol 00:08:16.596 ************************************ 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.856 ************************************ 00:08:16.856 START TEST nvmf_lvs_grow 00:08:16.856 ************************************ 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:16.856 * Looking for test storage... 
00:08:16.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.856 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:17.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.117 --rc genhtml_branch_coverage=1 00:08:17.117 --rc genhtml_function_coverage=1 00:08:17.117 --rc genhtml_legend=1 00:08:17.117 --rc geninfo_all_blocks=1 00:08:17.117 --rc geninfo_unexecuted_blocks=1 00:08:17.117 00:08:17.117 ' 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:17.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.117 --rc genhtml_branch_coverage=1 00:08:17.117 --rc genhtml_function_coverage=1 00:08:17.117 --rc genhtml_legend=1 00:08:17.117 --rc geninfo_all_blocks=1 00:08:17.117 --rc geninfo_unexecuted_blocks=1 00:08:17.117 00:08:17.117 ' 00:08:17.117 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:17.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.117 --rc genhtml_branch_coverage=1 00:08:17.117 --rc genhtml_function_coverage=1 00:08:17.118 --rc genhtml_legend=1 00:08:17.118 --rc geninfo_all_blocks=1 00:08:17.118 --rc geninfo_unexecuted_blocks=1 00:08:17.118 00:08:17.118 ' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:17.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.118 --rc genhtml_branch_coverage=1 00:08:17.118 --rc genhtml_function_coverage=1 00:08:17.118 --rc genhtml_legend=1 00:08:17.118 --rc geninfo_all_blocks=1 00:08:17.118 --rc geninfo_unexecuted_blocks=1 00:08:17.118 00:08:17.118 ' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:17.118 09:25:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:17.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:17.118 09:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:23.704 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:23.705 09:25:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.705 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.705 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:23.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:08:23.965 00:08:23.965 --- 10.0.0.2 ping statistics --- 00:08:23.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.965 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:08:23.965 00:08:23.965 --- 10.0.0.1 ping statistics --- 00:08:23.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.965 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:23.965 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2593270 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2593270 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2593270 ']' 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.226 09:25:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:24.226 [2024-12-09 09:25:59.516447] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
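[Annotation] At this point nvmf_tcp_init in nvmf/common.sh has finished building the loopback topology the rest of the log runs on: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and a ping in each direction confirms the link. A minimal standalone sketch of the same setup, with the commands lifted from the trace above (the cvl_* names are this host's renamed ice interfaces; substitute your own):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # root namespace -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root namespace

The nvmf_tgt started next is accordingly run under `ip netns exec cvl_0_0_ns_spdk`, which is why its listener at 10.0.0.2:4420 is only reachable through cvl_0_1.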
00:08:24.226 [2024-12-09 09:25:59.516506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.226 [2024-12-09 09:25:59.614233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.226 [2024-12-09 09:25:59.640307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.226 [2024-12-09 09:25:59.640354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.226 [2024-12-09 09:25:59.640362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.226 [2024-12-09 09:25:59.640370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.226 [2024-12-09 09:25:59.640376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.226 [2024-12-09 09:25:59.641141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.169 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:25.170 [2024-12-09 09:26:00.524136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:25.170 ************************************ 00:08:25.170 START TEST lvs_grow_clean 00:08:25.170 ************************************ 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:25.170 09:26:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.170 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.431 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.431 09:26:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.691 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a319bd19-5b5a-4996-9120-da086043247a 00:08:25.691 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:25.691 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a319bd19-5b5a-4996-9120-da086043247a lvol 150 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.952 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:26.213 [2024-12-09 09:26:01.545688] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:26.213 [2024-12-09 09:26:01.545759] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:26.213 true 00:08:26.213 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a319bd19-5b5a-4996-9120-da086043247a 00:08:26.213 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.473 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.473 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.473 09:26:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 00:08:26.734 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.994 [2024-12-09 09:26:02.271992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.994 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2593982 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2593982 /var/tmp/bdevperf.sock 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2593982 ']' 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:27.255 09:26:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:27.255 [2024-12-09 09:26:02.518004] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
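[Annotation] The lvs_grow_clean setup traced above reduces to a short RPC sequence: a 200 MiB file backs an AIO bdev with 4 KiB blocks; an lvstore with 4 MiB clusters is created on it, so 200 MiB / 4 MiB = 50 clusters, and with the cluster reserved for lvstore metadata the test expects total_data_clusters == 49 (checked at @29-@30); a 150 MiB lvol is carved out and exported over NVMe/TCP; then the backing file is doubled to 400 MiB and bdev_aio_rescan is issued (old block count 51200, new 102400), after which the lvstore still reports 49 clusters until bdev_lvol_grow_lvstore runs. A condensed sketch under stand-in names (the run uses the workspace's test/nvmf/target/aio_bdev file and scripts/rpc.py, and <lvs>/<lvol> stand for the UUIDs printed in the trace):

  truncate -s 200M /tmp/aio_file
  rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs      # 50 clusters - metadata = 49
  rpc.py bdev_lvol_create -u <lvs> lvol 150              # 150 MiB volume
  truncate -s 400M /tmp/aio_file                         # grow the backing file
  rpc.py bdev_aio_rescan aio_bdev                        # 51200 -> 102400 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol>
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_lvol_grow_lvstore -u <lvs>                 # issued mid-workload below; -> 99

The grow itself is deliberately deferred: bdevperf has just been launched with -z (idle until triggered over /var/tmp/bdevperf.sock), and @60 issues bdev_lvol_grow_lvstore while the 10 second workload is running, after which @61 checks total_data_clusters == 99 (100 clusters minus metadata).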
00:08:27.255 [2024-12-09 09:26:02.518060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593982 ] 00:08:27.255 [2024-12-09 09:26:02.606134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.255 [2024-12-09 09:26:02.633846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.196 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.196 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:28.196 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:28.457 Nvme0n1 00:08:28.457 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:28.457 [ 00:08:28.457 { 00:08:28.457 "name": "Nvme0n1", 00:08:28.457 "aliases": [ 00:08:28.457 "7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618" 00:08:28.457 ], 00:08:28.457 "product_name": "NVMe disk", 00:08:28.457 "block_size": 4096, 00:08:28.457 "num_blocks": 38912, 00:08:28.457 "uuid": "7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618", 00:08:28.457 "numa_id": 0, 00:08:28.457 "assigned_rate_limits": { 00:08:28.457 "rw_ios_per_sec": 0, 00:08:28.457 "rw_mbytes_per_sec": 0, 00:08:28.457 "r_mbytes_per_sec": 0, 00:08:28.457 "w_mbytes_per_sec": 0 00:08:28.457 }, 00:08:28.457 "claimed": false, 00:08:28.457 "zoned": false, 00:08:28.457 "supported_io_types": { 00:08:28.457 "read": true, 00:08:28.457 "write": true, 00:08:28.457 "unmap": true, 00:08:28.457 "flush": true, 00:08:28.457 "reset": true, 00:08:28.457 "nvme_admin": true, 00:08:28.457 "nvme_io": true, 00:08:28.457 "nvme_io_md": false, 00:08:28.457 "write_zeroes": true, 00:08:28.457 "zcopy": false, 00:08:28.457 "get_zone_info": false, 00:08:28.457 "zone_management": false, 00:08:28.457 "zone_append": false, 00:08:28.457 "compare": true, 00:08:28.457 "compare_and_write": true, 00:08:28.457 "abort": true, 00:08:28.457 "seek_hole": false, 00:08:28.457 "seek_data": false, 00:08:28.457 "copy": true, 00:08:28.457 "nvme_iov_md": false 00:08:28.457 }, 00:08:28.457 "memory_domains": [ 00:08:28.457 { 00:08:28.457 "dma_device_id": "system", 00:08:28.457 "dma_device_type": 1 00:08:28.457 } 00:08:28.457 ], 00:08:28.457 "driver_specific": { 00:08:28.457 "nvme": [ 00:08:28.457 { 00:08:28.457 "trid": { 00:08:28.457 "trtype": "TCP", 00:08:28.457 "adrfam": "IPv4", 00:08:28.457 "traddr": "10.0.0.2", 00:08:28.457 "trsvcid": "4420", 00:08:28.457 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:28.457 }, 00:08:28.457 "ctrlr_data": { 00:08:28.457 "cntlid": 1, 00:08:28.457 "vendor_id": "0x8086", 00:08:28.457 "model_number": "SPDK bdev Controller", 00:08:28.457 "serial_number": "SPDK0", 00:08:28.457 "firmware_revision": "25.01", 00:08:28.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.457 "oacs": { 00:08:28.457 "security": 0, 00:08:28.457 "format": 0, 00:08:28.457 "firmware": 0, 00:08:28.457 "ns_manage": 0 00:08:28.457 }, 00:08:28.457 "multi_ctrlr": true, 00:08:28.457 
"ana_reporting": false 00:08:28.457 }, 00:08:28.457 "vs": { 00:08:28.457 "nvme_version": "1.3" 00:08:28.457 }, 00:08:28.457 "ns_data": { 00:08:28.457 "id": 1, 00:08:28.457 "can_share": true 00:08:28.457 } 00:08:28.457 } 00:08:28.457 ], 00:08:28.457 "mp_policy": "active_passive" 00:08:28.457 } 00:08:28.457 } 00:08:28.457 ] 00:08:28.717 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2594206 00:08:28.717 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:28.717 09:26:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.717 Running I/O for 10 seconds... 00:08:29.657 Latency(us) 00:08:29.657 [2024-12-09T08:26:05.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.657 Nvme0n1 : 1.00 24029.00 93.86 0.00 0.00 0.00 0.00 0.00 00:08:29.657 [2024-12-09T08:26:05.110Z] =================================================================================================================== 00:08:29.657 [2024-12-09T08:26:05.110Z] Total : 24029.00 93.86 0.00 0.00 0.00 0.00 0.00 00:08:29.657 00:08:30.599 09:26:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a319bd19-5b5a-4996-9120-da086043247a 00:08:30.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.599 Nvme0n1 : 2.00 24809.00 96.91 0.00 0.00 0.00 0.00 0.00 00:08:30.599 [2024-12-09T08:26:06.052Z] =================================================================================================================== 00:08:30.599 [2024-12-09T08:26:06.052Z] Total : 24809.00 96.91 0.00 0.00 0.00 0.00 0.00 00:08:30.599 00:08:30.859 true 00:08:30.859 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:30.859 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:30.859 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:30.859 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:30.859 09:26:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2594206 00:08:31.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.801 Nvme0n1 : 3.00 25082.33 97.98 0.00 0.00 0.00 0.00 0.00 00:08:31.801 [2024-12-09T08:26:07.254Z] =================================================================================================================== 00:08:31.801 [2024-12-09T08:26:07.254Z] Total : 25082.33 97.98 0.00 0.00 0.00 0.00 0.00 00:08:31.801 00:08:32.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.744 Nvme0n1 : 4.00 25226.25 98.54 0.00 0.00 0.00 0.00 0.00 00:08:32.744 [2024-12-09T08:26:08.197Z] 
=================================================================================================================== 00:08:32.744 [2024-12-09T08:26:08.197Z] Total : 25226.25 98.54 0.00 0.00 0.00 0.00 0.00 00:08:32.744 00:08:33.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.687 Nvme0n1 : 5.00 25312.00 98.88 0.00 0.00 0.00 0.00 0.00 00:08:33.687 [2024-12-09T08:26:09.140Z] =================================================================================================================== 00:08:33.687 [2024-12-09T08:26:09.140Z] Total : 25312.00 98.88 0.00 0.00 0.00 0.00 0.00 00:08:33.687 00:08:34.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.632 Nvme0n1 : 6.00 25368.33 99.10 0.00 0.00 0.00 0.00 0.00 00:08:34.632 [2024-12-09T08:26:10.085Z] =================================================================================================================== 00:08:34.632 [2024-12-09T08:26:10.085Z] Total : 25368.33 99.10 0.00 0.00 0.00 0.00 0.00 00:08:34.632 00:08:35.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.577 Nvme0n1 : 7.00 25410.43 99.26 0.00 0.00 0.00 0.00 0.00 00:08:35.577 [2024-12-09T08:26:11.030Z] =================================================================================================================== 00:08:35.577 [2024-12-09T08:26:11.030Z] Total : 25410.43 99.26 0.00 0.00 0.00 0.00 0.00 00:08:35.577 00:08:36.961 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.961 Nvme0n1 : 8.00 25450.12 99.41 0.00 0.00 0.00 0.00 0.00 00:08:36.961 [2024-12-09T08:26:12.414Z] =================================================================================================================== 00:08:36.961 [2024-12-09T08:26:12.414Z] Total : 25450.12 99.41 0.00 0.00 0.00 0.00 0.00 00:08:36.961 00:08:37.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.904 Nvme0n1 : 9.00 25480.89 99.53 0.00 0.00 0.00 0.00 0.00 00:08:37.904 [2024-12-09T08:26:13.357Z] =================================================================================================================== 00:08:37.904 [2024-12-09T08:26:13.357Z] Total : 25480.89 99.53 0.00 0.00 0.00 0.00 0.00 00:08:37.904 00:08:38.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.844 Nvme0n1 : 10.00 25504.80 99.63 0.00 0.00 0.00 0.00 0.00 00:08:38.844 [2024-12-09T08:26:14.297Z] =================================================================================================================== 00:08:38.844 [2024-12-09T08:26:14.297Z] Total : 25504.80 99.63 0.00 0.00 0.00 0.00 0.00 00:08:38.844 00:08:38.844 00:08:38.844 Latency(us) 00:08:38.844 [2024-12-09T08:26:14.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.844 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.844 Nvme0n1 : 10.00 25503.77 99.62 0.00 0.00 5015.26 2211.84 12997.97 00:08:38.844 [2024-12-09T08:26:14.297Z] =================================================================================================================== 00:08:38.844 [2024-12-09T08:26:14.297Z] Total : 25503.77 99.62 0.00 0.00 5015.26 2211.84 12997.97 00:08:38.844 { 00:08:38.844 "results": [ 00:08:38.844 { 00:08:38.844 "job": "Nvme0n1", 00:08:38.844 "core_mask": "0x2", 00:08:38.844 "workload": "randwrite", 00:08:38.844 "status": "finished", 00:08:38.844 "queue_depth": 128, 00:08:38.844 "io_size": 4096, 00:08:38.844 
"runtime": 10.002952, 00:08:38.844 "iops": 25503.77128671616, 00:08:38.844 "mibps": 99.624106588735, 00:08:38.844 "io_failed": 0, 00:08:38.844 "io_timeout": 0, 00:08:38.844 "avg_latency_us": 5015.260453838103, 00:08:38.844 "min_latency_us": 2211.84, 00:08:38.844 "max_latency_us": 12997.973333333333 00:08:38.844 } 00:08:38.844 ], 00:08:38.844 "core_count": 1 00:08:38.844 } 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2593982 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2593982 ']' 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2593982 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2593982 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2593982' 00:08:38.844 killing process with pid 2593982 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2593982 00:08:38.844 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.844 00:08:38.844 Latency(us) 00:08:38.844 [2024-12-09T08:26:14.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.844 [2024-12-09T08:26:14.297Z] =================================================================================================================== 00:08:38.844 [2024-12-09T08:26:14.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2593982 00:08:38.844 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.103 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:39.363 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:39.363 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:39.363 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:39.363 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:39.363 09:26:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.623 [2024-12-09 09:26:14.883123] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:39.623 09:26:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:39.883 request: 00:08:39.883 { 00:08:39.883 "uuid": "a319bd19-5b5a-4996-9120-da086043247a", 00:08:39.883 "method": "bdev_lvol_get_lvstores", 00:08:39.883 "req_id": 1 00:08:39.883 } 00:08:39.883 Got JSON-RPC error response 00:08:39.883 response: 00:08:39.883 { 00:08:39.883 "code": -19, 00:08:39.883 "message": "No such device" 00:08:39.883 } 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.883 aio_bdev 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.883 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:40.142 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 -t 2000 00:08:40.142 [ 00:08:40.142 { 00:08:40.142 "name": "7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618", 00:08:40.142 "aliases": [ 00:08:40.142 "lvs/lvol" 00:08:40.142 ], 00:08:40.142 "product_name": "Logical Volume", 00:08:40.142 "block_size": 4096, 00:08:40.142 "num_blocks": 38912, 00:08:40.142 "uuid": "7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618", 00:08:40.142 "assigned_rate_limits": { 00:08:40.142 "rw_ios_per_sec": 0, 00:08:40.142 "rw_mbytes_per_sec": 0, 00:08:40.142 "r_mbytes_per_sec": 0, 00:08:40.142 "w_mbytes_per_sec": 0 00:08:40.142 }, 00:08:40.142 "claimed": false, 00:08:40.142 "zoned": false, 00:08:40.142 "supported_io_types": { 00:08:40.142 "read": true, 00:08:40.142 "write": true, 00:08:40.142 "unmap": true, 00:08:40.142 "flush": false, 00:08:40.142 "reset": true, 00:08:40.142 "nvme_admin": false, 00:08:40.142 "nvme_io": false, 00:08:40.142 "nvme_io_md": false, 00:08:40.142 "write_zeroes": true, 00:08:40.142 "zcopy": false, 00:08:40.142 "get_zone_info": false, 00:08:40.142 "zone_management": false, 00:08:40.142 "zone_append": false, 00:08:40.142 "compare": false, 00:08:40.142 "compare_and_write": false, 00:08:40.142 "abort": false, 00:08:40.142 "seek_hole": true, 00:08:40.142 "seek_data": true, 00:08:40.142 "copy": false, 00:08:40.142 "nvme_iov_md": false 00:08:40.142 }, 00:08:40.142 "driver_specific": { 00:08:40.142 "lvol": { 00:08:40.142 "lvol_store_uuid": "a319bd19-5b5a-4996-9120-da086043247a", 00:08:40.142 "base_bdev": "aio_bdev", 00:08:40.142 "thin_provision": false, 00:08:40.142 "num_allocated_clusters": 38, 00:08:40.142 "snapshot": false, 00:08:40.142 "clone": false, 00:08:40.142 "esnap_clone": false 00:08:40.142 } 00:08:40.142 } 00:08:40.142 } 00:08:40.142 ] 00:08:40.142 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:40.401 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:40.401 
09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:40.401 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:40.401 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a319bd19-5b5a-4996-9120-da086043247a 00:08:40.401 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:40.660 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:40.660 09:26:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7db433cf-7e99-4dd9-b4f8-d0c7ac9ae618 00:08:40.660 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a319bd19-5b5a-4996-9120-da086043247a 00:08:40.920 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.181 00:08:41.181 real 0m15.866s 00:08:41.181 user 0m15.580s 00:08:41.181 sys 0m1.405s 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:41.181 ************************************ 00:08:41.181 END TEST lvs_grow_clean 00:08:41.181 ************************************ 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:41.181 ************************************ 00:08:41.181 START TEST lvs_grow_dirty 00:08:41.181 ************************************ 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.181 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.441 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:41.441 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:41.699 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:41.699 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:41.699 09:26:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.699 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.699 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.699 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 lvol 150 00:08:41.959 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6e448875-65c6-467f-a199-28c1e55a206b 00:08:41.959 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.959 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.959 [2024-12-09 09:26:17.396202] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:41.959 [2024-12-09 09:26:17.396245] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.959 true 00:08:42.219 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:42.219 09:26:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:42.219 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:42.219 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:42.479 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6e448875-65c6-467f-a199-28c1e55a206b 00:08:42.479 09:26:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.738 [2024-12-09 09:26:18.058115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.738 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2597078 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2597078 /var/tmp/bdevperf.sock 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2597078 ']' 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:43.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.000 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.000 [2024-12-09 09:26:18.291531] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
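[Annotation] This is the dirty variant of the same flow: run_test lvs_grow_dirty invokes lvs_grow with the `dirty` argument (@103), so the guard that evaluated false in the clean pass (the `[[ '' == dirty ]]` check at @72) will now take the dirty-only branch; everything up to here, including the fresh lvstore 0e01be2b-... and lvol 6e448875-..., mirrors the clean run. The second bdevperf instance launched above is again idle until triggered; the initiator-side steps, copied from the clean run's trace with paths shortened (bdevperf.py is examples/bdev/bdevperf/bdevperf.py in the SPDK tree), are:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests    # 10 s, QD 128, 4 KiB randwrite

As in the clean pass, the lvstore is grown from 49 to 99 data clusters while this workload is in flight; the dirty-specific teardown steps come later in the run.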
00:08:43.000 [2024-12-09 09:26:18.291582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2597078 ] 00:08:43.000 [2024-12-09 09:26:18.375198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.000 [2024-12-09 09:26:18.391385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.286 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:43.286 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:43.286 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:43.548 Nvme0n1 00:08:43.548 09:26:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.809 [ 00:08:43.809 { 00:08:43.809 "name": "Nvme0n1", 00:08:43.809 "aliases": [ 00:08:43.809 "6e448875-65c6-467f-a199-28c1e55a206b" 00:08:43.809 ], 00:08:43.809 "product_name": "NVMe disk", 00:08:43.809 "block_size": 4096, 00:08:43.809 "num_blocks": 38912, 00:08:43.809 "uuid": "6e448875-65c6-467f-a199-28c1e55a206b", 00:08:43.809 "numa_id": 0, 00:08:43.809 "assigned_rate_limits": { 00:08:43.809 "rw_ios_per_sec": 0, 00:08:43.809 "rw_mbytes_per_sec": 0, 00:08:43.809 "r_mbytes_per_sec": 0, 00:08:43.809 "w_mbytes_per_sec": 0 00:08:43.809 }, 00:08:43.809 "claimed": false, 00:08:43.809 "zoned": false, 00:08:43.809 "supported_io_types": { 00:08:43.809 "read": true, 00:08:43.809 "write": true, 00:08:43.809 "unmap": true, 00:08:43.809 "flush": true, 00:08:43.809 "reset": true, 00:08:43.809 "nvme_admin": true, 00:08:43.809 "nvme_io": true, 00:08:43.809 "nvme_io_md": false, 00:08:43.810 "write_zeroes": true, 00:08:43.810 "zcopy": false, 00:08:43.810 "get_zone_info": false, 00:08:43.810 "zone_management": false, 00:08:43.810 "zone_append": false, 00:08:43.810 "compare": true, 00:08:43.810 "compare_and_write": true, 00:08:43.810 "abort": true, 00:08:43.810 "seek_hole": false, 00:08:43.810 "seek_data": false, 00:08:43.810 "copy": true, 00:08:43.810 "nvme_iov_md": false 00:08:43.810 }, 00:08:43.810 "memory_domains": [ 00:08:43.810 { 00:08:43.810 "dma_device_id": "system", 00:08:43.810 "dma_device_type": 1 00:08:43.810 } 00:08:43.810 ], 00:08:43.810 "driver_specific": { 00:08:43.810 "nvme": [ 00:08:43.810 { 00:08:43.810 "trid": { 00:08:43.810 "trtype": "TCP", 00:08:43.810 "adrfam": "IPv4", 00:08:43.810 "traddr": "10.0.0.2", 00:08:43.810 "trsvcid": "4420", 00:08:43.810 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.810 }, 00:08:43.810 "ctrlr_data": { 00:08:43.810 "cntlid": 1, 00:08:43.810 "vendor_id": "0x8086", 00:08:43.810 "model_number": "SPDK bdev Controller", 00:08:43.810 "serial_number": "SPDK0", 00:08:43.810 "firmware_revision": "25.01", 00:08:43.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.810 "oacs": { 00:08:43.810 "security": 0, 00:08:43.810 "format": 0, 00:08:43.810 "firmware": 0, 00:08:43.810 "ns_manage": 0 00:08:43.810 }, 00:08:43.810 "multi_ctrlr": true, 00:08:43.810 
"ana_reporting": false 00:08:43.810 }, 00:08:43.810 "vs": { 00:08:43.810 "nvme_version": "1.3" 00:08:43.810 }, 00:08:43.810 "ns_data": { 00:08:43.810 "id": 1, 00:08:43.810 "can_share": true 00:08:43.810 } 00:08:43.810 } 00:08:43.810 ], 00:08:43.810 "mp_policy": "active_passive" 00:08:43.810 } 00:08:43.810 } 00:08:43.810 ] 00:08:43.810 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2597194 00:08:43.810 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:43.810 09:26:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.810 Running I/O for 10 seconds... 00:08:44.753 Latency(us) 00:08:44.753 [2024-12-09T08:26:20.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.753 Nvme0n1 : 1.00 25302.00 98.84 0.00 0.00 0.00 0.00 0.00 00:08:44.753 [2024-12-09T08:26:20.206Z] =================================================================================================================== 00:08:44.753 [2024-12-09T08:26:20.206Z] Total : 25302.00 98.84 0.00 0.00 0.00 0.00 0.00 00:08:44.753 00:08:45.696 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:45.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.696 Nvme0n1 : 2.00 25417.00 99.29 0.00 0.00 0.00 0.00 0.00 00:08:45.696 [2024-12-09T08:26:21.149Z] =================================================================================================================== 00:08:45.696 [2024-12-09T08:26:21.149Z] Total : 25417.00 99.29 0.00 0.00 0.00 0.00 0.00 00:08:45.696 00:08:45.956 true 00:08:45.957 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:45.957 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.957 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.957 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.957 09:26:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2597194 00:08:46.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.896 Nvme0n1 : 3.00 25497.00 99.60 0.00 0.00 0.00 0.00 0.00 00:08:46.896 [2024-12-09T08:26:22.349Z] =================================================================================================================== 00:08:46.896 [2024-12-09T08:26:22.349Z] Total : 25497.00 99.60 0.00 0.00 0.00 0.00 0.00 00:08:46.896 00:08:47.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.836 Nvme0n1 : 4.00 25538.25 99.76 0.00 0.00 0.00 0.00 0.00 00:08:47.836 [2024-12-09T08:26:23.289Z] 
=================================================================================================================== 00:08:47.836 [2024-12-09T08:26:23.289Z] Total : 25538.25 99.76 0.00 0.00 0.00 0.00 0.00 00:08:47.836 00:08:48.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.778 Nvme0n1 : 5.00 25576.00 99.91 0.00 0.00 0.00 0.00 0.00 00:08:48.778 [2024-12-09T08:26:24.231Z] =================================================================================================================== 00:08:48.778 [2024-12-09T08:26:24.231Z] Total : 25576.00 99.91 0.00 0.00 0.00 0.00 0.00 00:08:48.778 00:08:49.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.715 Nvme0n1 : 6.00 25601.50 100.01 0.00 0.00 0.00 0.00 0.00 00:08:49.715 [2024-12-09T08:26:25.168Z] =================================================================================================================== 00:08:49.715 [2024-12-09T08:26:25.168Z] Total : 25601.50 100.01 0.00 0.00 0.00 0.00 0.00 00:08:49.715 00:08:51.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.099 Nvme0n1 : 7.00 25627.43 100.11 0.00 0.00 0.00 0.00 0.00 00:08:51.099 [2024-12-09T08:26:26.552Z] =================================================================================================================== 00:08:51.099 [2024-12-09T08:26:26.552Z] Total : 25627.43 100.11 0.00 0.00 0.00 0.00 0.00 00:08:51.099 00:08:52.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.040 Nvme0n1 : 8.00 25647.50 100.19 0.00 0.00 0.00 0.00 0.00 00:08:52.040 [2024-12-09T08:26:27.493Z] =================================================================================================================== 00:08:52.040 [2024-12-09T08:26:27.493Z] Total : 25647.50 100.19 0.00 0.00 0.00 0.00 0.00 00:08:52.040 00:08:52.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.983 Nvme0n1 : 9.00 25663.44 100.25 0.00 0.00 0.00 0.00 0.00 00:08:52.983 [2024-12-09T08:26:28.436Z] =================================================================================================================== 00:08:52.983 [2024-12-09T08:26:28.436Z] Total : 25663.44 100.25 0.00 0.00 0.00 0.00 0.00 00:08:52.983 00:08:53.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.925 Nvme0n1 : 10.00 25676.30 100.30 0.00 0.00 0.00 0.00 0.00 00:08:53.925 [2024-12-09T08:26:29.378Z] =================================================================================================================== 00:08:53.925 [2024-12-09T08:26:29.378Z] Total : 25676.30 100.30 0.00 0.00 0.00 0.00 0.00 00:08:53.925 00:08:53.925 00:08:53.925 Latency(us) 00:08:53.925 [2024-12-09T08:26:29.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.925 Nvme0n1 : 10.00 25677.01 100.30 0.00 0.00 4982.11 2416.64 8792.75 00:08:53.925 [2024-12-09T08:26:29.378Z] =================================================================================================================== 00:08:53.925 [2024-12-09T08:26:29.378Z] Total : 25677.01 100.30 0.00 0.00 4982.11 2416.64 8792.75 00:08:53.925 { 00:08:53.925 "results": [ 00:08:53.925 { 00:08:53.925 "job": "Nvme0n1", 00:08:53.925 "core_mask": "0x2", 00:08:53.925 "workload": "randwrite", 00:08:53.925 "status": "finished", 00:08:53.925 "queue_depth": 128, 00:08:53.925 "io_size": 4096, 
00:08:53.925 "runtime": 10.00471, 00:08:53.925 "iops": 25677.006130112717, 00:08:53.925 "mibps": 100.3008051957528, 00:08:53.925 "io_failed": 0, 00:08:53.925 "io_timeout": 0, 00:08:53.925 "avg_latency_us": 4982.111915170248, 00:08:53.925 "min_latency_us": 2416.64, 00:08:53.925 "max_latency_us": 8792.746666666666 00:08:53.925 } 00:08:53.925 ], 00:08:53.925 "core_count": 1 00:08:53.925 } 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2597078 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2597078 ']' 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2597078 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2597078 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2597078' 00:08:53.925 killing process with pid 2597078 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2597078 00:08:53.925 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.925 00:08:53.925 Latency(us) 00:08:53.925 [2024-12-09T08:26:29.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.925 [2024-12-09T08:26:29.378Z] =================================================================================================================== 00:08:53.925 [2024-12-09T08:26:29.378Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2597078 00:08:53.925 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.185 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:54.445 09:26:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2593270 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2593270 00:08:54.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2593270 Killed "${NVMF_APP[@]}" "$@" 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.445 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2599450 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2599450 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2599450 ']' 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.706 09:26:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.706 [2024-12-09 09:26:29.962504] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:08:54.706 [2024-12-09 09:26:29.962562] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.706 [2024-12-09 09:26:30.052767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.706 [2024-12-09 09:26:30.068367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.706 [2024-12-09 09:26:30.068396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.706 [2024-12-09 09:26:30.068402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.706 [2024-12-09 09:26:30.068407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:54.706 [2024-12-09 09:26:30.068411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.706 [2024-12-09 09:26:30.068871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:55.649 [2024-12-09 09:26:30.934210] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:55.649 [2024-12-09 09:26:30.934288] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:55.649 [2024-12-09 09:26:30.934310] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6e448875-65c6-467f-a199-28c1e55a206b 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6e448875-65c6-467f-a199-28c1e55a206b 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.649 09:26:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:55.916 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e448875-65c6-467f-a199-28c1e55a206b -t 2000 00:08:55.916 [ 00:08:55.916 { 00:08:55.916 "name": "6e448875-65c6-467f-a199-28c1e55a206b", 00:08:55.916 "aliases": [ 00:08:55.916 "lvs/lvol" 00:08:55.916 ], 00:08:55.916 "product_name": "Logical Volume", 00:08:55.916 "block_size": 4096, 00:08:55.916 "num_blocks": 38912, 00:08:55.916 "uuid": "6e448875-65c6-467f-a199-28c1e55a206b", 00:08:55.916 "assigned_rate_limits": { 00:08:55.916 "rw_ios_per_sec": 0, 00:08:55.916 "rw_mbytes_per_sec": 0, 
00:08:55.916 "r_mbytes_per_sec": 0, 00:08:55.916 "w_mbytes_per_sec": 0 00:08:55.916 }, 00:08:55.916 "claimed": false, 00:08:55.916 "zoned": false, 00:08:55.916 "supported_io_types": { 00:08:55.916 "read": true, 00:08:55.916 "write": true, 00:08:55.916 "unmap": true, 00:08:55.916 "flush": false, 00:08:55.916 "reset": true, 00:08:55.916 "nvme_admin": false, 00:08:55.916 "nvme_io": false, 00:08:55.916 "nvme_io_md": false, 00:08:55.916 "write_zeroes": true, 00:08:55.916 "zcopy": false, 00:08:55.916 "get_zone_info": false, 00:08:55.916 "zone_management": false, 00:08:55.916 "zone_append": false, 00:08:55.916 "compare": false, 00:08:55.916 "compare_and_write": false, 00:08:55.916 "abort": false, 00:08:55.916 "seek_hole": true, 00:08:55.916 "seek_data": true, 00:08:55.916 "copy": false, 00:08:55.916 "nvme_iov_md": false 00:08:55.916 }, 00:08:55.916 "driver_specific": { 00:08:55.916 "lvol": { 00:08:55.916 "lvol_store_uuid": "0e01be2b-2ee0-47b7-820a-b0ffdd591249", 00:08:55.916 "base_bdev": "aio_bdev", 00:08:55.916 "thin_provision": false, 00:08:55.916 "num_allocated_clusters": 38, 00:08:55.916 "snapshot": false, 00:08:55.916 "clone": false, 00:08:55.916 "esnap_clone": false 00:08:55.916 } 00:08:55.916 } 00:08:55.916 } 00:08:55.916 ] 00:08:55.916 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:55.916 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:55.916 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:56.225 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:56.225 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:56.225 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:56.225 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:56.225 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.525 [2024-12-09 09:26:31.782831] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:56.525 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:56.795 request: 00:08:56.795 { 00:08:56.795 "uuid": "0e01be2b-2ee0-47b7-820a-b0ffdd591249", 00:08:56.795 "method": "bdev_lvol_get_lvstores", 00:08:56.795 "req_id": 1 00:08:56.795 } 00:08:56.795 Got JSON-RPC error response 00:08:56.795 response: 00:08:56.795 { 00:08:56.795 "code": -19, 00:08:56.795 "message": "No such device" 00:08:56.795 } 00:08:56.795 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:56.795 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.795 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.795 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.795 09:26:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.795 aio_bdev 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6e448875-65c6-467f-a199-28c1e55a206b 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=6e448875-65c6-467f-a199-28c1e55a206b 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.795 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.795 09:26:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:57.056 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6e448875-65c6-467f-a199-28c1e55a206b -t 2000 00:08:57.056 [ 00:08:57.056 { 00:08:57.056 "name": "6e448875-65c6-467f-a199-28c1e55a206b", 00:08:57.056 "aliases": [ 00:08:57.056 "lvs/lvol" 00:08:57.056 ], 00:08:57.056 "product_name": "Logical Volume", 00:08:57.056 "block_size": 4096, 00:08:57.056 "num_blocks": 38912, 00:08:57.056 "uuid": "6e448875-65c6-467f-a199-28c1e55a206b", 00:08:57.056 "assigned_rate_limits": { 00:08:57.056 "rw_ios_per_sec": 0, 00:08:57.056 "rw_mbytes_per_sec": 0, 00:08:57.056 "r_mbytes_per_sec": 0, 00:08:57.056 "w_mbytes_per_sec": 0 00:08:57.056 }, 00:08:57.056 "claimed": false, 00:08:57.056 "zoned": false, 00:08:57.056 "supported_io_types": { 00:08:57.056 "read": true, 00:08:57.056 "write": true, 00:08:57.056 "unmap": true, 00:08:57.056 "flush": false, 00:08:57.056 "reset": true, 00:08:57.056 "nvme_admin": false, 00:08:57.056 "nvme_io": false, 00:08:57.056 "nvme_io_md": false, 00:08:57.056 "write_zeroes": true, 00:08:57.056 "zcopy": false, 00:08:57.056 "get_zone_info": false, 00:08:57.056 "zone_management": false, 00:08:57.056 "zone_append": false, 00:08:57.056 "compare": false, 00:08:57.056 "compare_and_write": false, 00:08:57.056 "abort": false, 00:08:57.056 "seek_hole": true, 00:08:57.056 "seek_data": true, 00:08:57.056 "copy": false, 00:08:57.056 "nvme_iov_md": false 00:08:57.056 }, 00:08:57.056 "driver_specific": { 00:08:57.056 "lvol": { 00:08:57.056 "lvol_store_uuid": "0e01be2b-2ee0-47b7-820a-b0ffdd591249", 00:08:57.056 "base_bdev": "aio_bdev", 00:08:57.056 "thin_provision": false, 00:08:57.056 "num_allocated_clusters": 38, 00:08:57.056 "snapshot": false, 00:08:57.056 "clone": false, 00:08:57.056 "esnap_clone": false 00:08:57.056 } 00:08:57.056 } 00:08:57.056 } 00:08:57.056 ] 00:08:57.056 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:57.056 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:57.056 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.317 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.317 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:57.317 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.577 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.577 09:26:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6e448875-65c6-467f-a199-28c1e55a206b 00:08:57.577 09:26:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 00:08:57.839 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.099 00:08:58.099 real 0m16.847s 00:08:58.099 user 0m44.445s 00:08:58.099 sys 0m2.966s 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.099 ************************************ 00:08:58.099 END TEST lvs_grow_dirty 00:08:58.099 ************************************ 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:58.099 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.100 nvmf_trace.0 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.100 rmmod nvme_tcp 00:08:58.100 rmmod nvme_fabrics 00:08:58.100 rmmod nvme_keyring 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:58.100 
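For reference, the dirty-grow recovery that lvs_grow_dirty exercised above condenses to the shell sequence below. This is a minimal sketch, not part of the test run: RPC is shorthand for the full scripts/rpc.py path used in the trace, and the lvstore UUID, lvol UUID, and aio_bdev path are the ones from this particular run.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Grow the lvstore while bdevperf I/O is in flight, then simulate a crash,
# leaving the on-disk metadata dirty:
$RPC bdev_lvol_grow_lvstore -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249
kill -9 2593270   # the first nvmf_tgt, killed without a clean bs unload
# A fresh nvmf_tgt re-creates the AIO bdev; blobstore recovery replays the
# dirty metadata ("Performing recovery on blobstore" in the trace above):
$RPC bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# The grown geometry must survive the crash: 99 total / 61 free clusters.
$RPC bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 | jq -r '.[0].total_data_clusters'
$RPC bdev_lvol_get_lvstores -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249 | jq -r '.[0].free_clusters'
# Cleanup mirrors lines 92-95 of nvmf_lvs_grow.sh as traced above:
$RPC bdev_lvol_delete 6e448875-65c6-467f-a199-28c1e55a206b
$RPC bdev_lvol_delete_lvstore -u 0e01be2b-2ee0-47b7-820a-b0ffdd591249
$RPC bdev_aio_delete aio_bdev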
09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2599450 ']' 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2599450 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2599450 ']' 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2599450 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.100 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2599450 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2599450' 00:08:58.359 killing process with pid 2599450 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2599450 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2599450 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.359 09:26:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.938 00:09:00.938 real 0m43.663s 00:09:00.938 user 1m6.082s 00:09:00.938 sys 0m10.308s 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.938 ************************************ 00:09:00.938 END TEST nvmf_lvs_grow 00:09:00.938 ************************************ 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.938 ************************************ 00:09:00.938 START TEST nvmf_bdev_io_wait 00:09:00.938 ************************************ 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.938 * Looking for test storage... 00:09:00.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.938 09:26:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.938 --rc genhtml_branch_coverage=1 00:09:00.938 --rc genhtml_function_coverage=1 00:09:00.938 --rc genhtml_legend=1 00:09:00.938 --rc geninfo_all_blocks=1 00:09:00.938 --rc geninfo_unexecuted_blocks=1 00:09:00.938 00:09:00.938 ' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.938 --rc genhtml_branch_coverage=1 00:09:00.938 --rc genhtml_function_coverage=1 00:09:00.938 --rc genhtml_legend=1 00:09:00.938 --rc geninfo_all_blocks=1 00:09:00.938 --rc geninfo_unexecuted_blocks=1 00:09:00.938 00:09:00.938 ' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.938 --rc genhtml_branch_coverage=1 00:09:00.938 --rc genhtml_function_coverage=1 00:09:00.938 --rc genhtml_legend=1 00:09:00.938 --rc geninfo_all_blocks=1 00:09:00.938 --rc geninfo_unexecuted_blocks=1 00:09:00.938 00:09:00.938 ' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.938 --rc genhtml_branch_coverage=1 00:09:00.938 --rc genhtml_function_coverage=1 00:09:00.938 --rc genhtml_legend=1 00:09:00.938 --rc geninfo_all_blocks=1 00:09:00.938 --rc geninfo_unexecuted_blocks=1 00:09:00.938 00:09:00.938 ' 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.938 09:26:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.938 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.939 09:26:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.078 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:09.079 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:09.079 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.079 09:26:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:09.079 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:09.079 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:09.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:09:09.079 00:09:09.079 --- 10.0.0.2 ping statistics --- 00:09:09.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.079 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:09:09.079 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:09.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:09.079 00:09:09.079 --- 10.0.0.1 ping statistics --- 00:09:09.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.080 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2604511 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2604511 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2604511 ']' 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.080 09:26:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.080 [2024-12-09 09:26:43.599169] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:09:09.080 [2024-12-09 09:26:43.599231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.080 [2024-12-09 09:26:43.700356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.080 [2024-12-09 09:26:43.729717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.080 [2024-12-09 09:26:43.729770] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.080 [2024-12-09 09:26:43.729778] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.080 [2024-12-09 09:26:43.729785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.080 [2024-12-09 09:26:43.729792] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.080 [2024-12-09 09:26:43.732012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.080 [2024-12-09 09:26:43.732141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.080 [2024-12-09 09:26:43.732310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.080 [2024-12-09 09:26:43.732311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:09:09.080 [2024-12-09 09:26:44.512652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.080 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.342 Malloc0 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.342 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:09.343 [2024-12-09 09:26:44.571894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2604576 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2604578 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.343 { 00:09:09.343 "params": { 
00:09:09.343 "name": "Nvme$subsystem", 00:09:09.343 "trtype": "$TEST_TRANSPORT", 00:09:09.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "$NVMF_PORT", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.343 "hdgst": ${hdgst:-false}, 00:09:09.343 "ddgst": ${ddgst:-false} 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 } 00:09:09.343 EOF 00:09:09.343 )") 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2604580 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.343 { 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme$subsystem", 00:09:09.343 "trtype": "$TEST_TRANSPORT", 00:09:09.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "$NVMF_PORT", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.343 "hdgst": ${hdgst:-false}, 00:09:09.343 "ddgst": ${ddgst:-false} 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 } 00:09:09.343 EOF 00:09:09.343 )") 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2604583 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.343 { 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme$subsystem", 00:09:09.343 "trtype": "$TEST_TRANSPORT", 00:09:09.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "$NVMF_PORT", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.343 "hdgst": ${hdgst:-false}, 
00:09:09.343 "ddgst": ${ddgst:-false} 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 } 00:09:09.343 EOF 00:09:09.343 )") 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:09.343 { 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme$subsystem", 00:09:09.343 "trtype": "$TEST_TRANSPORT", 00:09:09.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "$NVMF_PORT", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:09.343 "hdgst": ${hdgst:-false}, 00:09:09.343 "ddgst": ${ddgst:-false} 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 } 00:09:09.343 EOF 00:09:09.343 )") 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2604576 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme1", 00:09:09.343 "trtype": "tcp", 00:09:09.343 "traddr": "10.0.0.2", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "4420", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.343 "hdgst": false, 00:09:09.343 "ddgst": false 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 }' 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme1", 00:09:09.343 "trtype": "tcp", 00:09:09.343 "traddr": "10.0.0.2", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "4420", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.343 "hdgst": false, 00:09:09.343 "ddgst": false 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 }' 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme1", 00:09:09.343 "trtype": "tcp", 00:09:09.343 "traddr": "10.0.0.2", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "4420", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.343 "hdgst": false, 00:09:09.343 "ddgst": false 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 }' 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:09.343 09:26:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:09.343 "params": { 00:09:09.343 "name": "Nvme1", 00:09:09.343 "trtype": "tcp", 00:09:09.343 "traddr": "10.0.0.2", 00:09:09.343 "adrfam": "ipv4", 00:09:09.343 "trsvcid": "4420", 00:09:09.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:09.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:09.343 "hdgst": false, 00:09:09.343 "ddgst": false 00:09:09.343 }, 00:09:09.343 "method": "bdev_nvme_attach_controller" 00:09:09.343 }' 00:09:09.343 [2024-12-09 09:26:44.626849] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:09:09.344 [2024-12-09 09:26:44.626902] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:09.344 [2024-12-09 09:26:44.629780] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:09:09.344 [2024-12-09 09:26:44.629831] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:09.344 [2024-12-09 09:26:44.630057] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:09:09.344 [2024-12-09 09:26:44.630101] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:09.344 [2024-12-09 09:26:44.630615] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:09:09.344 [2024-12-09 09:26:44.630664] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:09.344 [2024-12-09 09:26:44.779428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.344 [2024-12-09 09:26:44.792016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:09.604 [2024-12-09 09:26:44.823917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.604 [2024-12-09 09:26:44.850378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:09.604 [2024-12-09 09:26:44.867380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.604 [2024-12-09 09:26:44.878894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:09.604 [2024-12-09 09:26:44.901129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.604 [2024-12-09 09:26:44.912170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:09.604 Running I/O for 1 seconds... 00:09:09.863 Running I/O for 1 seconds... 00:09:09.863 Running I/O for 1 seconds... 00:09:09.863 Running I/O for 1 seconds... 00:09:10.803 10731.00 IOPS, 41.92 MiB/s 00:09:10.803 Latency(us) 00:09:10.803 [2024-12-09T08:26:46.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.803 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:10.803 Nvme1n1 : 1.01 10738.56 41.95 0.00 0.00 11839.41 4751.36 18022.40 00:09:10.803 [2024-12-09T08:26:46.256Z] =================================================================================================================== 00:09:10.803 [2024-12-09T08:26:46.256Z] Total : 10738.56 41.95 0.00 0.00 11839.41 4751.36 18022.40 00:09:10.803 12821.00 IOPS, 50.08 MiB/s [2024-12-09T08:26:46.256Z] 11131.00 IOPS, 43.48 MiB/s 00:09:10.803 Latency(us) 00:09:10.803 [2024-12-09T08:26:46.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.803 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:10.803 Nvme1n1 : 1.01 12877.33 50.30 0.00 0.00 9905.34 5188.27 22391.47 00:09:10.803 [2024-12-09T08:26:46.256Z] =================================================================================================================== 00:09:10.803 [2024-12-09T08:26:46.256Z] Total : 12877.33 50.30 0.00 0.00 9905.34 5188.27 22391.47 00:09:10.803 00:09:10.803 Latency(us) 00:09:10.803 [2024-12-09T08:26:46.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.803 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:10.803 Nvme1n1 : 1.01 11256.50 43.97 0.00 0.00 11346.97 2853.55 25012.91 00:09:10.803 [2024-12-09T08:26:46.256Z] =================================================================================================================== 00:09:10.803 [2024-12-09T08:26:46.256Z] Total : 11256.50 43.97 0.00 0.00 11346.97 2853.55 25012.91 00:09:10.803 180408.00 IOPS, 704.72 MiB/s 00:09:10.803 Latency(us) 00:09:10.803 [2024-12-09T08:26:46.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.803 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:10.803 Nvme1n1 : 1.00 180056.43 703.35 0.00 0.00 707.27 296.96 1966.08 00:09:10.803 [2024-12-09T08:26:46.256Z] 
=================================================================================================================== 00:09:10.803 [2024-12-09T08:26:46.256Z] Total : 180056.43 703.35 0.00 0.00 707.27 296.96 1966.08 00:09:10.803 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2604578 00:09:10.803 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2604580 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2604583 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:11.064 rmmod nvme_tcp 00:09:11.064 rmmod nvme_fabrics 00:09:11.064 rmmod nvme_keyring 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2604511 ']' 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2604511 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2604511 ']' 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2604511 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2604511 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2604511' 00:09:11.064 killing process with pid 2604511 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2604511 00:09:11.064 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2604511 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.325 09:26:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:13.241 00:09:13.241 real 0m12.744s 00:09:13.241 user 0m18.626s 00:09:13.241 sys 0m6.888s 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.241 ************************************ 00:09:13.241 END TEST nvmf_bdev_io_wait 00:09:13.241 ************************************ 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.241 09:26:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.502 ************************************ 00:09:13.502 START TEST nvmf_queue_depth 00:09:13.502 ************************************ 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:13.502 * Looking for test storage... 
00:09:13.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.502 --rc genhtml_branch_coverage=1 00:09:13.502 --rc genhtml_function_coverage=1 00:09:13.502 --rc genhtml_legend=1 00:09:13.502 --rc geninfo_all_blocks=1 00:09:13.502 --rc geninfo_unexecuted_blocks=1 00:09:13.502 00:09:13.502 ' 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.502 --rc genhtml_branch_coverage=1 00:09:13.502 --rc genhtml_function_coverage=1 00:09:13.502 --rc genhtml_legend=1 00:09:13.502 --rc geninfo_all_blocks=1 00:09:13.502 --rc geninfo_unexecuted_blocks=1 00:09:13.502 00:09:13.502 ' 00:09:13.502 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:13.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.502 --rc genhtml_branch_coverage=1 00:09:13.502 --rc genhtml_function_coverage=1 00:09:13.502 --rc genhtml_legend=1 00:09:13.502 --rc geninfo_all_blocks=1 00:09:13.503 --rc geninfo_unexecuted_blocks=1 00:09:13.503 00:09:13.503 ' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:13.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.503 --rc genhtml_branch_coverage=1 00:09:13.503 --rc genhtml_function_coverage=1 00:09:13.503 --rc genhtml_legend=1 00:09:13.503 --rc geninfo_all_blocks=1 00:09:13.503 --rc geninfo_unexecuted_blocks=1 00:09:13.503 00:09:13.503 ' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:13.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:13.503 09:26:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:21.651 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:21.651 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:21.651 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:21.651 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:21.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:09:21.651 00:09:21.651 --- 10.0.0.2 ping statistics --- 00:09:21.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.651 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:09:21.651 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:09:21.651 00:09:21.651 --- 10.0.0.1 ping statistics --- 00:09:21.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.652 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2609275 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2609275 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2609275 ']' 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.652 09:26:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.652 [2024-12-09 09:26:56.462425] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
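
nvmf_tcp_init, traced above, builds the test topology without any virtual interfaces: the target port cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; an ACCEPT rule opens TCP/4420 and the two pings prove reachability in both directions. Condensed from the commands in this run (the iptables comment tag is abbreviated here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'       # tag makes teardown greppable
    ping -c 1 10.0.0.2                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
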
00:09:21.652 [2024-12-09 09:26:56.462493] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.652 [2024-12-09 09:26:56.565406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.652 [2024-12-09 09:26:56.591568] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.652 [2024-12-09 09:26:56.591617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.652 [2024-12-09 09:26:56.591625] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.652 [2024-12-09 09:26:56.591633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.652 [2024-12-09 09:26:56.591647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.652 [2024-12-09 09:26:56.592410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.912 [2024-12-09 09:26:57.327619] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.912 Malloc0 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.912 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.173 09:26:57 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.173 [2024-12-09 09:26:57.388889] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2609597 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2609597 /var/tmp/bdevperf.sock 00:09:22.173 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2609597 ']' 00:09:22.174 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:22.174 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.174 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:22.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:22.174 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.174 09:26:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:22.174 [2024-12-09 09:26:57.447067] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
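
At this point queue_depth.sh has provisioned the whole target over RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 with that bdev as its namespace, and a listener on 10.0.0.2:4420, after which it launches bdevperf (queue depth 1024, 4 KiB verify I/O, 10 s) on a second RPC socket. The same provisioning, replayed as plain rpc.py calls (a sketch; rpc_cmd wraps exactly these, and the -o/-u transport flags are passed through as-is from the harness):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
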
00:09:22.174 [2024-12-09 09:26:57.447132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2609597 ] 00:09:22.174 [2024-12-09 09:26:57.539306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.174 [2024-12-09 09:26:57.567718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.115 NVMe0n1 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.115 09:26:58 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:23.115 Running I/O for 10 seconds... 00:09:25.449 10205.00 IOPS, 39.86 MiB/s [2024-12-09T08:27:01.846Z] 10758.00 IOPS, 42.02 MiB/s [2024-12-09T08:27:02.792Z] 11091.00 IOPS, 43.32 MiB/s [2024-12-09T08:27:03.735Z] 11267.00 IOPS, 44.01 MiB/s [2024-12-09T08:27:04.680Z] 11471.20 IOPS, 44.81 MiB/s [2024-12-09T08:27:05.622Z] 11777.67 IOPS, 46.01 MiB/s [2024-12-09T08:27:06.563Z] 12024.14 IOPS, 46.97 MiB/s [2024-12-09T08:27:07.506Z] 12286.12 IOPS, 47.99 MiB/s [2024-12-09T08:27:08.891Z] 12399.00 IOPS, 48.43 MiB/s [2024-12-09T08:27:08.891Z] 12491.70 IOPS, 48.80 MiB/s 00:09:33.438 Latency(us) 00:09:33.438 [2024-12-09T08:27:08.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.438 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:33.438 Verification LBA range: start 0x0 length 0x4000 00:09:33.438 NVMe0n1 : 10.05 12529.73 48.94 0.00 0.00 81467.16 18568.53 77769.39 00:09:33.438 [2024-12-09T08:27:08.891Z] =================================================================================================================== 00:09:33.438 [2024-12-09T08:27:08.891Z] Total : 12529.73 48.94 0.00 0.00 81467.16 18568.53 77769.39 00:09:33.438 { 00:09:33.438 "results": [ 00:09:33.438 { 00:09:33.438 "job": "NVMe0n1", 00:09:33.438 "core_mask": "0x1", 00:09:33.438 "workload": "verify", 00:09:33.438 "status": "finished", 00:09:33.438 "verify_range": { 00:09:33.438 "start": 0, 00:09:33.438 "length": 16384 00:09:33.438 }, 00:09:33.438 "queue_depth": 1024, 00:09:33.438 "io_size": 4096, 00:09:33.438 "runtime": 10.051374, 00:09:33.438 "iops": 12529.729766298617, 00:09:33.438 "mibps": 48.94425689960397, 00:09:33.438 "io_failed": 0, 00:09:33.438 "io_timeout": 0, 00:09:33.438 "avg_latency_us": 81467.1587167536, 00:09:33.438 "min_latency_us": 18568.533333333333, 00:09:33.438 "max_latency_us": 77769.38666666667 00:09:33.438 } 00:09:33.438 ], 00:09:33.438 "core_count": 1 00:09:33.438 } 00:09:33.438 09:27:08 
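
The summary numbers are internally consistent: bdevperf reports both IOPS and MiB/s for 4 KiB I/O at queue depth 1024 over a ~10 s run, and MiB/s is simply IOPS x io_size / 2^20. Checking the JSON block above:

    # 12529.729766 IOPS * 4096 B / 1048576 B/MiB ~= 48.94 MiB/s, as reported
    awk 'BEGIN { printf "%.2f\n", 12529.729766298617 * 4096 / 1048576 }'
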
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2609597 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2609597 ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2609597 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2609597 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2609597' 00:09:33.438 killing process with pid 2609597 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2609597 00:09:33.438 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.438 00:09:33.438 Latency(us) 00:09:33.438 [2024-12-09T08:27:08.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.438 [2024-12-09T08:27:08.891Z] =================================================================================================================== 00:09:33.438 [2024-12-09T08:27:08.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2609597 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.438 rmmod nvme_tcp 00:09:33.438 rmmod nvme_fabrics 00:09:33.438 rmmod nvme_keyring 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2609275 ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2609275 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2609275 ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2609275 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2609275 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2609275' 00:09:33.438 killing process with pid 2609275 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2609275 00:09:33.438 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2609275 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.699 09:27:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.631 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.631 00:09:35.631 real 0m22.374s 00:09:35.631 user 0m25.678s 00:09:35.631 sys 0m6.960s 00:09:35.631 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.631 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:35.631 ************************************ 00:09:35.631 END TEST nvmf_queue_depth 00:09:35.631 ************************************ 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core -- 
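
Teardown mirrors setup. The ACCEPT rule installed earlier carried the 'SPDK_NVMF:' comment precisely so that nvmftestfini can strip every rule this test added, and only those, with the save/filter/restore one-liner traced at nvmf/common.sh@791:

    # Remove all iptables rules tagged by the test, leaving the rest intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
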
common/autotest_common.sh@10 -- # set +x 00:09:35.891 ************************************ 00:09:35.891 START TEST nvmf_target_multipath 00:09:35.891 ************************************ 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:35.891 * Looking for test storage... 00:09:35.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:35.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.891 --rc genhtml_branch_coverage=1 00:09:35.891 --rc genhtml_function_coverage=1 00:09:35.891 --rc genhtml_legend=1 00:09:35.891 --rc geninfo_all_blocks=1 00:09:35.891 --rc geninfo_unexecuted_blocks=1 00:09:35.891 00:09:35.891 ' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:35.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.891 --rc genhtml_branch_coverage=1 00:09:35.891 --rc genhtml_function_coverage=1 00:09:35.891 --rc genhtml_legend=1 00:09:35.891 --rc geninfo_all_blocks=1 00:09:35.891 --rc geninfo_unexecuted_blocks=1 00:09:35.891 00:09:35.891 ' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:35.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.891 --rc genhtml_branch_coverage=1 00:09:35.891 --rc genhtml_function_coverage=1 00:09:35.891 --rc genhtml_legend=1 00:09:35.891 --rc geninfo_all_blocks=1 00:09:35.891 --rc geninfo_unexecuted_blocks=1 00:09:35.891 00:09:35.891 ' 00:09:35.891 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:35.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.891 --rc genhtml_branch_coverage=1 00:09:35.892 --rc genhtml_function_coverage=1 00:09:35.892 --rc genhtml_legend=1 00:09:35.892 --rc geninfo_all_blocks=1 00:09:35.892 --rc geninfo_unexecuted_blocks=1 00:09:35.892 00:09:35.892 ' 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
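
The wall of scripts/common.sh trace above is just a semantic version comparison: 'lt 1.15 2' decides whether the installed lcov predates 2.x so the matching LCOV_OPTS can be exported. The mechanism, condensed into a less-than-only sketch (cmp_lt is a hypothetical name; the real cmp_versions handles the full set of comparison operators):

    cmp_lt() {   # cmp_lt A B: succeed if version A < version B
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < n; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal, so not less-than
    }
    cmp_lt 1.15 2 && echo "pre-2.x lcov"   # succeeds, as in the trace
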
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.892 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.152 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.153 09:27:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:44.292 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:44.292 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:44.292 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
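
One genuine wart is preserved here: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as an integer operand, hence the "integer expression expected" noise every time the file is sourced. The trace does not show which variable expanded to nothing, but the usual cure is a default expansion, sketched with a placeholder name:

    # FLAG stands in for whatever common.sh line 33 tests; it is unset here.
    if [ "${FLAG:-0}" -eq 1 ]; then   # '' becomes 0, so test(1) stays quiet
        echo "feature enabled"
    fi
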
net_devs=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:44.293 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:44.293 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:44.293 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:44.293 09:27:18 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:44.293 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:44.293 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.293 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:09:44.293 00:09:44.293 --- 10.0.0.2 ping statistics --- 00:09:44.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.293 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:09:44.293 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:09:44.293 00:09:44.293 --- 10.0.0.1 ping statistics --- 00:09:44.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.293 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:44.294 only one NIC for nvmf test 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
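
Note that the multipath test never reaches any I/O in this environment: both E810 ports already serve as the single initiator/target pair, so no second path exists, the tested value at multipath.sh@45 expands to the empty string, and the script exits 0 rather than failing. The guard amounts to the following (the variable name is inferred from the NVMF_SECOND_TARGET_IP= assignment traced earlier, not shown explicitly at line 45):

    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi
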
00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:44.294 rmmod nvme_tcp 00:09:44.294 rmmod nvme_fabrics 00:09:44.294 rmmod nvme_keyring 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.294 09:27:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.719 00:09:45.719 real 0m9.653s 00:09:45.719 user 0m2.057s 00:09:45.719 sys 0m5.531s 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:45.719 ************************************ 00:09:45.719 END TEST nvmf_target_multipath 00:09:45.719 ************************************ 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.719 ************************************ 00:09:45.719 START TEST nvmf_zcopy 00:09:45.719 ************************************ 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:45.719 * Looking for test storage... 
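zcopy.sh opens by locating test storage (the 'Looking for'/'Found' pair around this point) and then probes the installed lcov so it can pick matching coverage flags: scripts/common.sh's cmp_versions compares dot-separated fields left to right, so lt 1.15 2 asks whether lcov 1.15 predates the 2.x series. A shorter equivalent of that dotted-version test, assuming GNU sort's -V (the harness deliberately avoids this dependency and walks the fields by hand, as traced below):

    # strict less-than on dotted version strings
    lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_branch_coverage=1 style options"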
00:09:45.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.719 09:27:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.719 --rc genhtml_branch_coverage=1 00:09:45.719 --rc genhtml_function_coverage=1 00:09:45.719 --rc genhtml_legend=1 00:09:45.719 --rc geninfo_all_blocks=1 00:09:45.719 --rc geninfo_unexecuted_blocks=1 00:09:45.719 00:09:45.719 ' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.719 --rc genhtml_branch_coverage=1 00:09:45.719 --rc genhtml_function_coverage=1 00:09:45.719 --rc genhtml_legend=1 00:09:45.719 --rc geninfo_all_blocks=1 00:09:45.719 --rc geninfo_unexecuted_blocks=1 00:09:45.719 00:09:45.719 ' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.719 --rc genhtml_branch_coverage=1 00:09:45.719 --rc genhtml_function_coverage=1 00:09:45.719 --rc genhtml_legend=1 00:09:45.719 --rc geninfo_all_blocks=1 00:09:45.719 --rc geninfo_unexecuted_blocks=1 00:09:45.719 00:09:45.719 ' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.719 --rc genhtml_branch_coverage=1 00:09:45.719 --rc genhtml_function_coverage=1 00:09:45.719 --rc genhtml_legend=1 00:09:45.719 --rc geninfo_all_blocks=1 00:09:45.719 --rc geninfo_unexecuted_blocks=1 00:09:45.719 00:09:45.719 ' 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:45.719 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.720 09:27:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:53.863 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:53.863 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:53.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:53.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.863 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:09:53.864 00:09:53.864 --- 10.0.0.2 ping statistics --- 00:09:53.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.864 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:53.864 00:09:53.864 --- 10.0.0.1 ping statistics --- 00:09:53.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.864 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2620875 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2620875 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2620875 ']' 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.864 09:27:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.864 [2024-12-09 09:27:28.709597] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
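nvmfappstart has just launched the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 2620875), and waitforlisten now blocks until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, assuming the repo root as working directory and rpc_get_methods as the liveness probe (waitforlisten's internals may differ):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll the default RPC socket until the target responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        sleep 0.5
    done

The EAL banner that follows confirms the reactor mask: -m 0x2 puts the target on core 1, leaving core 0 free for bdevperf later.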
00:09:53.864 [2024-12-09 09:27:28.709672] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.864 [2024-12-09 09:27:28.807593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.864 [2024-12-09 09:27:28.833358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.864 [2024-12-09 09:27:28.833407] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.864 [2024-12-09 09:27:28.833415] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.864 [2024-12-09 09:27:28.833423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.864 [2024-12-09 09:27:28.833429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.864 [2024-12-09 09:27:28.834176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.125 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.125 [2024-12-09 09:27:29.577302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.385 [2024-12-09 09:27:29.601543] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.385 malloc0 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:54.385 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:54.386 { 00:09:54.386 "params": { 00:09:54.386 "name": "Nvme$subsystem", 00:09:54.386 "trtype": "$TEST_TRANSPORT", 00:09:54.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:54.386 "adrfam": "ipv4", 00:09:54.386 "trsvcid": "$NVMF_PORT", 00:09:54.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:54.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:54.386 "hdgst": ${hdgst:-false}, 00:09:54.386 "ddgst": ${ddgst:-false} 00:09:54.386 }, 00:09:54.386 "method": "bdev_nvme_attach_controller" 00:09:54.386 } 00:09:54.386 EOF 00:09:54.386 )") 00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
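Collected from the rpc_cmd calls traced above, the target-side provisioning amounts to six direct scripts/rpc.py invocations (a sketch with flags copied verbatim from the trace; rpc_cmd is the harness's wrapper around rpc.py on /var/tmp/spdk.sock):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport, zcopy enabled
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB ramdisk, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON that gen_nvmf_target_json prints next is the initiator-side half: bdevperf reads it from /dev/fd/62, issues bdev_nvme_attach_controller for Nvme1 against 10.0.0.2:4420, and then runs ten seconds of verify at queue depth 128 with 8 KiB I/O (-t 10 -q 128 -w verify -o 8192).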
00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:54.386 09:27:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:54.386 "params": { 00:09:54.386 "name": "Nvme1", 00:09:54.386 "trtype": "tcp", 00:09:54.386 "traddr": "10.0.0.2", 00:09:54.386 "adrfam": "ipv4", 00:09:54.386 "trsvcid": "4420", 00:09:54.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:54.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:54.386 "hdgst": false, 00:09:54.386 "ddgst": false 00:09:54.386 }, 00:09:54.386 "method": "bdev_nvme_attach_controller" 00:09:54.386 }' 00:09:54.386 [2024-12-09 09:27:29.701341] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:09:54.386 [2024-12-09 09:27:29.701413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620918 ] 00:09:54.386 [2024-12-09 09:27:29.793560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.386 [2024-12-09 09:27:29.821871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.958 Running I/O for 10 seconds... 00:09:56.898 6513.00 IOPS, 50.88 MiB/s [2024-12-09T08:27:33.291Z] 7961.00 IOPS, 62.20 MiB/s [2024-12-09T08:27:34.233Z] 8582.67 IOPS, 67.05 MiB/s [2024-12-09T08:27:35.288Z] 8893.25 IOPS, 69.48 MiB/s [2024-12-09T08:27:36.280Z] 9076.80 IOPS, 70.91 MiB/s [2024-12-09T08:27:37.221Z] 9199.00 IOPS, 71.87 MiB/s [2024-12-09T08:27:38.162Z] 9283.29 IOPS, 72.53 MiB/s [2024-12-09T08:27:39.549Z] 9347.50 IOPS, 73.03 MiB/s [2024-12-09T08:27:40.490Z] 9400.00 IOPS, 73.44 MiB/s [2024-12-09T08:27:40.490Z] 9440.10 IOPS, 73.75 MiB/s 00:10:05.037 Latency(us) 00:10:05.037 [2024-12-09T08:27:40.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:05.037 Verification LBA range: start 0x0 length 0x1000 00:10:05.037 Nvme1n1 : 10.05 9401.01 73.45 0.00 0.00 13513.32 2020.69 42161.49 00:10:05.037 [2024-12-09T08:27:40.490Z] =================================================================================================================== 00:10:05.037 [2024-12-09T08:27:40.490Z] Total : 9401.01 73.45 0.00 0.00 13513.32 2020.69 42161.49 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2623090 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:05.037 { 00:10:05.037 "params": { 00:10:05.037 "name": 
"Nvme$subsystem", 00:10:05.037 "trtype": "$TEST_TRANSPORT", 00:10:05.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:05.037 "adrfam": "ipv4", 00:10:05.037 "trsvcid": "$NVMF_PORT", 00:10:05.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:05.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:05.037 "hdgst": ${hdgst:-false}, 00:10:05.037 "ddgst": ${ddgst:-false} 00:10:05.037 }, 00:10:05.037 "method": "bdev_nvme_attach_controller" 00:10:05.037 } 00:10:05.037 EOF 00:10:05.037 )") 00:10:05.037 [2024-12-09 09:27:40.276061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.276091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:05.037 09:27:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:05.037 "params": { 00:10:05.037 "name": "Nvme1", 00:10:05.037 "trtype": "tcp", 00:10:05.037 "traddr": "10.0.0.2", 00:10:05.037 "adrfam": "ipv4", 00:10:05.037 "trsvcid": "4420", 00:10:05.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:05.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:05.037 "hdgst": false, 00:10:05.037 "ddgst": false 00:10:05.037 }, 00:10:05.037 "method": "bdev_nvme_attach_controller" 00:10:05.037 }' 00:10:05.037 [2024-12-09 09:27:40.288061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.288071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.300090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.300098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.312120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.312129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.318220] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:05.037 [2024-12-09 09:27:40.318267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623090 ] 00:10:05.037 [2024-12-09 09:27:40.324151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.324158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.336182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.336190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.348214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.348222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.360243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.360251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.372275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.372282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.384304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.384312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.396336] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.396344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.401272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.037 [2024-12-09 09:27:40.408371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.408381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.417182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.037 [2024-12-09 09:27:40.420403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.420414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.432438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.432448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.444466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.444478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.456494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.456505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.468526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:05.037 [2024-12-09 09:27:40.468540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.037 [2024-12-09 09:27:40.480556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.037 [2024-12-09 09:27:40.480564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.492597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.492613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.504621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.504632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.516664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.516674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.528686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.528693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.540715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.540723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.552746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.552755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.564776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.564786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.576805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.576813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.588836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.588843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.600868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.600876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.612901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.612911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.624932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.624940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.636965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.636972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 
09:27:40.648996] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.649004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.661029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.661038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.673060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.673067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.685091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.685098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.297 [2024-12-09 09:27:40.697123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.297 [2024-12-09 09:27:40.697135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.298 [2024-12-09 09:27:40.709160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.298 [2024-12-09 09:27:40.709174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.298 Running I/O for 5 seconds... 00:10:05.298 [2024-12-09 09:27:40.721188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.298 [2024-12-09 09:27:40.721198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.298 [2024-12-09 09:27:40.737243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.298 [2024-12-09 09:27:40.737261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.750573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.750590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.763951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.763968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.777149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.777165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.790753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.790768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.803555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.803572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.817079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.558 [2024-12-09 09:27:40.817096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.558 [2024-12-09 09:27:40.830666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:05.558 [2024-12-09 09:27:40.830682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:05.558 [2024-12-09 09:27:40.844003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:05.558 [2024-12-09 09:27:40.844019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c:2130 / nvmf_rpc.c:1520 error pair repeats roughly every 13 ms from 09:27:40.844003 through 09:27:44.667907 (elapsed 00:10:05.558 - 00:10:09.467); only the interleaved throughput samples and the final entries are kept below ...]
18721.00 IOPS, 146.26 MiB/s [2024-12-09T08:27:41.794Z]
18836.00 IOPS, 147.16 MiB/s [2024-12-09T08:27:42.836Z]
18867.33 IOPS, 147.40 MiB/s [2024-12-09T08:27:43.878Z]
00:10:09.467 [2024-12-09 09:27:44.667907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:09.467 [2024-12-09 09:27:44.667923]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.467 [2024-12-09 09:27:44.681394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.467 [2024-12-09 09:27:44.681409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.467 [2024-12-09 09:27:44.693817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.467 [2024-12-09 09:27:44.693834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.467 [2024-12-09 09:27:44.707190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.707205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.720318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.720333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 18890.75 IOPS, 147.58 MiB/s [2024-12-09T08:27:44.921Z] [2024-12-09 09:27:44.733780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.733795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.746135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.746151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.759291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.759307] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.772390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.772407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.785818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.785833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.799140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.799156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.812561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.812576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.825819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.825835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.839048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.839064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.852790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.852804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 
09:27:44.866045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.866060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.879289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.879306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.892340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.892355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.905836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.905852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.468 [2024-12-09 09:27:44.918543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.468 [2024-12-09 09:27:44.918558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.736 [2024-12-09 09:27:44.931631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.736 [2024-12-09 09:27:44.931655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.736 [2024-12-09 09:27:44.945071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.736 [2024-12-09 09:27:44.945086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:44.957465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:44.957483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:44.970800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:44.970815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:44.983668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:44.983683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:44.996918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:44.996934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.010117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.010133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.023424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.023440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.036384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.036399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.049601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.049618] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.062739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.062755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.076250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.076265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.089561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.089577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.101894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.101909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.114958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.114974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.128618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.128634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.141238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.141254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.154486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.154502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.167525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.167541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.737 [2024-12-09 09:27:45.180478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.737 [2024-12-09 09:27:45.180501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.192815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.192835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.205530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.205546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.218532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.218548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.231970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.231986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.244929] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.244945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.258202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.258217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.271566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.271582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.284765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.284781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.298056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.298072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.311231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.311249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.324551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.324567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.337818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.337834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.350526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.350541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.363600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.363616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.376649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.376664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.389910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.389925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.403305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.403320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.416091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.416107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.428601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.428620] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.998 [2024-12-09 09:27:45.441881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.998 [2024-12-09 09:27:45.441896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.455024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.455043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.468247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.468263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.481493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.481509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.494077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.494093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.506752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.506768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.519905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.519921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.533166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.533182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.546368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.546385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.559663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.559679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.572961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.572978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.585851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.585867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.599383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.599399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.612916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.612932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.626104] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.626120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.639326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.639342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.652877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.652894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.666382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.666399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.679346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.679366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.691658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.691674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.259 [2024-12-09 09:27:45.704820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.259 [2024-12-09 09:27:45.704837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.718324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.718341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 18894.40 IOPS, 147.61 MiB/s [2024-12-09T08:27:45.973Z] [2024-12-09 09:27:45.730959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.730979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 00:10:10.520 Latency(us) 00:10:10.520 [2024-12-09T08:27:45.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.520 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:10.520 Nvme1n1 : 5.01 18898.32 147.64 0.00 0.00 6767.03 2757.97 17257.81 00:10:10.520 [2024-12-09T08:27:45.973Z] =================================================================================================================== 00:10:10.520 [2024-12-09T08:27:45.973Z] Total : 18898.32 147.64 0.00 0.00 6767.03 2757.97 17257.81 00:10:10.520 [2024-12-09 09:27:45.740376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.740389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.752413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.752428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.764438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.764451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 
09:27:45.776468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.776482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.788495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.788506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.800526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.800535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.812559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.812572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 [2024-12-09 09:27:45.824591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:10.520 [2024-12-09 09:27:45.824602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:10.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2623090) - No such process 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2623090 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.520 delay0 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.520 09:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:10.781 [2024-12-09 09:27:45.993131] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:17.362 Initializing NVMe Controllers 00:10:17.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:17.362 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:17.362 Initialization complete. Launching workers. 00:10:17.362 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 117 00:10:17.362 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 407, failed to submit 30 00:10:17.362 success 231, unsuccessful 176, failed 0 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:17.362 rmmod nvme_tcp 00:10:17.362 rmmod nvme_fabrics 00:10:17.362 rmmod nvme_keyring 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2620875 ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2620875 ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2620875' 00:10:17.362 killing process with pid 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2620875 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:17.362 
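[editor's note] The zcopy wrap-up traced above reduces to four commands. A minimal sketch using SPDK's scripts/rpc.py against the default RPC socket (the script itself drives the same RPCs through its rpc_cmd wrapper; the NQN, bdev names, delay values, and abort arguments are copied from the trace, everything else here is an assumption):

    # Free NSID 1, then re-export it backed by a delay bdev so that commands
    # stay in flight long enough to be aborted (delay values are microseconds, ~1 s).
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive random I/O at queue depth 64 for 5 s while submitting aborts.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The deliberately slow delay0 namespace is what makes the abort counters above ("success 231, unsuccessful 176") meaningful: commands linger in the queue long enough for their aborts to race them.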
09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.362 09:27:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.274 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.274 00:10:19.274 real 0m33.496s 00:10:19.274 user 0m44.126s 00:10:19.274 sys 0m11.078s 00:10:19.274 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.274 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.274 ************************************ 00:10:19.274 END TEST nvmf_zcopy 00:10:19.274 ************************************ 00:10:19.274 09:27:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.275 ************************************ 00:10:19.275 START TEST nvmf_nmic 00:10:19.275 ************************************ 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:19.275 * Looking for test storage... 
00:10:19.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:19.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.275 --rc genhtml_branch_coverage=1 00:10:19.275 --rc genhtml_function_coverage=1 00:10:19.275 --rc genhtml_legend=1 00:10:19.275 --rc geninfo_all_blocks=1 00:10:19.275 --rc geninfo_unexecuted_blocks=1 00:10:19.275 00:10:19.275 ' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:19.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.275 --rc genhtml_branch_coverage=1 00:10:19.275 --rc genhtml_function_coverage=1 00:10:19.275 --rc genhtml_legend=1 00:10:19.275 --rc geninfo_all_blocks=1 00:10:19.275 --rc geninfo_unexecuted_blocks=1 00:10:19.275 00:10:19.275 ' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:19.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.275 --rc genhtml_branch_coverage=1 00:10:19.275 --rc genhtml_function_coverage=1 00:10:19.275 --rc genhtml_legend=1 00:10:19.275 --rc geninfo_all_blocks=1 00:10:19.275 --rc geninfo_unexecuted_blocks=1 00:10:19.275 00:10:19.275 ' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:19.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.275 --rc genhtml_branch_coverage=1 00:10:19.275 --rc genhtml_function_coverage=1 00:10:19.275 --rc genhtml_legend=1 00:10:19.275 --rc geninfo_all_blocks=1 00:10:19.275 --rc geninfo_unexecuted_blocks=1 00:10:19.275 00:10:19.275 ' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
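[editor's note] The lcov version check traced above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2" -> cmp_versions), which selects the --rc lcov_*/genhtml_* option spelling exported right after it. A condensed, self-contained sketch of the same field-by-field compare (simplified; the real helper additionally validates each field through its decimal guard, visible in the trace):

    lt() { cmp_versions "$1" "<" "$2"; }
    cmp_versions() {
        local IFS=.- v ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # A missing field counts as 0, so "1.15 < 2" compares 1.15 against 2.0.
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $2 == ">" ]]; return; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $2 == "<" ]]; return; }
        done
        [[ $2 == *"="* ]] # all fields equal: true only for ==, <=, >=
    }
    lt 1.15 2 && echo "lcov predates 2.x" # prints: lcov predates 2.x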
00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... duplicated golangci/protoc/go segments elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.275 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... duplicated segments elided ...]:/var/lib/snapd/snap/bin 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... duplicated segments elided ...]:/var/lib/snapd/snap/bin 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... duplicated segments elided ...]:/var/lib/snapd/snap/bin 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.276 09:27:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.416 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:27.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:27.417 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.417 09:28:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:27.417 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:27.417 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.417 09:28:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:10:27.417 00:10:27.417 --- 10.0.0.2 ping statistics --- 00:10:27.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.417 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:10:27.417 00:10:27.417 --- 10.0.0.1 ping statistics --- 00:10:27.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.417 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2629622 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2629622 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2629622 ']' 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.417 09:28:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.417 [2024-12-09 09:28:02.209669] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
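[annotation] The nvmf_tcp_init and nvmfappstart steps traced above reduce to a short shell sequence: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, its peer (cvl_0_1) stays in the root namespace as the initiator, an iptables rule admits NVMe/TCP port 4420, connectivity is verified in both directions, and nvmf_tgt is then launched inside the namespace. A condensed replay, using the interface names and addresses from this run:

  # Condensed from the trace above; interface names (cvl_0_0/cvl_0_1) and
  # the 10.0.0.0/24 addressing are specific to this run.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The SPDK_NVMF comment tag is what teardown greps for when restoring the firewall.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &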
00:10:27.417 [2024-12-09 09:28:02.209738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.417 [2024-12-09 09:28:02.310159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.417 [2024-12-09 09:28:02.340816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.417 [2024-12-09 09:28:02.340868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:27.417 [2024-12-09 09:28:02.340879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.417 [2024-12-09 09:28:02.340889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.417 [2024-12-09 09:28:02.340902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.417 [2024-12-09 09:28:02.342912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.417 [2024-12-09 09:28:02.343044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.417 [2024-12-09 09:28:02.343208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.417 [2024-12-09 09:28:02.343209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.679 [2024-12-09 09:28:03.069793] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.679 Malloc0 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.679 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 [2024-12-09 09:28:03.147399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:27.940 test case1: single bdev can't be used in multiple subsystems 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 [2024-12-09 09:28:03.183336] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:27.940 [2024-12-09 09:28:03.183358] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:27.940 [2024-12-09 09:28:03.183370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.940 request: 00:10:27.940 { 00:10:27.940 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:27.940 "namespace": { 00:10:27.940 "bdev_name": "Malloc0", 00:10:27.940 "no_auto_visible": false, 
00:10:27.940 "hide_metadata": false 00:10:27.940 }, 00:10:27.940 "method": "nvmf_subsystem_add_ns", 00:10:27.940 "req_id": 1 00:10:27.940 } 00:10:27.940 Got JSON-RPC error response 00:10:27.940 response: 00:10:27.940 { 00:10:27.940 "code": -32602, 00:10:27.940 "message": "Invalid parameters" 00:10:27.940 } 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:27.940 Adding namespace failed - expected result. 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:27.940 test case2: host connect to nvmf target in multiple paths 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:27.940 [2024-12-09 09:28:03.195505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.940 09:28:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:29.324 09:28:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:31.232 09:28:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:31.232 09:28:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:31.232 09:28:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:31.232 09:28:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:31.232 09:28:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:33.164 09:28:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:33.164 09:28:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:33.164 [global] 00:10:33.164 thread=1 00:10:33.164 invalidate=1 00:10:33.164 rw=write 00:10:33.164 time_based=1 00:10:33.164 runtime=1 00:10:33.164 ioengine=libaio 00:10:33.164 direct=1 00:10:33.164 bs=4096 00:10:33.164 iodepth=1 00:10:33.164 norandommap=0 00:10:33.164 numjobs=1 00:10:33.164 00:10:33.164 verify_dump=1 00:10:33.164 verify_backlog=512 00:10:33.164 verify_state_save=0 00:10:33.164 do_verify=1 00:10:33.164 verify=crc32c-intel 00:10:33.164 [job0] 00:10:33.164 filename=/dev/nvme0n1 00:10:33.164 Could not set queue depth (nvme0n1) 00:10:33.427 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.427 fio-3.35 00:10:33.427 Starting 1 thread 00:10:34.811 00:10:34.811 job0: (groupid=0, jobs=1): err= 0: pid=2631165: Mon Dec 9 09:28:09 2024 00:10:34.811 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:34.811 slat (nsec): min=8464, max=61610, avg=27101.11, stdev=3293.45 00:10:34.811 clat (usec): min=552, max=42931, avg=1220.57, stdev=3130.94 00:10:34.811 lat (usec): min=580, max=42958, avg=1247.67, stdev=3130.94 00:10:34.811 clat percentiles (usec): 00:10:34.811 | 1.00th=[ 734], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:10:34.811 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1004], 00:10:34.811 | 70.00th=[ 1012], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1074], 00:10:34.811 | 99.00th=[ 1172], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:10:34.811 | 99.99th=[42730] 00:10:34.811 write: IOPS=542, BW=2170KiB/s (2222kB/s)(2172KiB/1001msec); 0 zone resets 00:10:34.811 slat (usec): min=9, max=28641, avg=80.10, stdev=1228.00 00:10:34.811 clat (usec): min=335, max=805, avg=570.34, stdev=100.82 00:10:34.811 lat (usec): min=346, max=29333, avg=650.43, stdev=1237.54 00:10:34.811 clat percentiles (usec): 00:10:34.811 | 1.00th=[ 359], 5.00th=[ 396], 10.00th=[ 424], 20.00th=[ 486], 00:10:34.811 | 30.00th=[ 506], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 594], 00:10:34.811 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 725], 00:10:34.811 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 807], 99.95th=[ 807], 00:10:34.811 | 99.99th=[ 807] 00:10:34.811 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:34.811 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:34.811 lat (usec) : 500=14.79%, 750=36.30%, 1000=27.87% 00:10:34.811 lat (msec) : 2=20.76%, 50=0.28% 00:10:34.811 cpu : usr=1.70%, sys=3.20%, ctx=1057, majf=0, minf=1 00:10:34.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.811 issued rwts: total=512,543,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.811 00:10:34.811 Run status group 0 (all jobs): 00:10:34.811 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:34.811 WRITE: bw=2170KiB/s (2222kB/s), 2170KiB/s-2170KiB/s (2222kB/s-2222kB/s), io=2172KiB (2224kB), run=1001-1001msec 00:10:34.811 00:10:34.811 Disk stats (read/write): 00:10:34.811 nvme0n1: ios=481/512, merge=0/0, ticks=1338/267, in_queue=1605, util=98.90% 
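[annotation] The fio-wrapper run above generates the single-job file shown: libaio, 4 KiB sequential writes at queue depth 1 for one second, verified with crc32c-intel. A roughly equivalent standalone fio command line, assuming the connected namespace still enumerates as /dev/nvme0n1 as it did here:

  # Roughly equivalent to the generated job file above; /dev/nvme0n1 is the
  # namespace as enumerated in this run and may differ on another host.
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512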
00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:34.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:34.811 09:28:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.811 rmmod nvme_tcp 00:10:34.811 rmmod nvme_fabrics 00:10:34.811 rmmod nvme_keyring 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2629622 ']' 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2629622 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2629622 ']' 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2629622 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629622 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629622' 00:10:34.811 killing process with pid 2629622 00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2629622 
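[annotation] Teardown, traced here and completing just below, reverses the setup: disconnect the host, kill the target process, unload the host-side NVMe modules, strip the tagged iptables rule, and remove the namespace. Condensed, with the pid and names from this run:

  # Condensed teardown mirroring the trace; 2629622 is this run's nvmf_tgt pid.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  kill 2629622 && wait 2629622
  modprobe -v -r nvme-tcp                                # rmmod output above shows
                                                         # nvme_fabrics/nvme_keyring go too
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1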
00:10:34.811 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2629622 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.074 09:28:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:37.048 00:10:37.048 real 0m17.890s 00:10:37.048 user 0m50.475s 00:10:37.048 sys 0m6.508s 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:37.048 ************************************ 00:10:37.048 END TEST nvmf_nmic 00:10:37.048 ************************************ 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.048 ************************************ 00:10:37.048 START TEST nvmf_fio_target 00:10:37.048 ************************************ 00:10:37.048 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:37.345 * Looking for test storage... 
00:10:37.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:37.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.345 --rc genhtml_branch_coverage=1 00:10:37.345 --rc genhtml_function_coverage=1 00:10:37.345 --rc genhtml_legend=1 00:10:37.345 --rc geninfo_all_blocks=1 00:10:37.345 --rc geninfo_unexecuted_blocks=1 00:10:37.345 00:10:37.345 ' 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:37.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.345 --rc genhtml_branch_coverage=1 00:10:37.345 --rc genhtml_function_coverage=1 00:10:37.345 --rc genhtml_legend=1 00:10:37.345 --rc geninfo_all_blocks=1 00:10:37.345 --rc geninfo_unexecuted_blocks=1 00:10:37.345 00:10:37.345 ' 00:10:37.345 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:37.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.345 --rc genhtml_branch_coverage=1 00:10:37.345 --rc genhtml_function_coverage=1 00:10:37.345 --rc genhtml_legend=1 00:10:37.345 --rc geninfo_all_blocks=1 00:10:37.346 --rc geninfo_unexecuted_blocks=1 00:10:37.346 00:10:37.346 ' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.346 --rc genhtml_branch_coverage=1 00:10:37.346 --rc genhtml_function_coverage=1 00:10:37.346 --rc genhtml_legend=1 00:10:37.346 --rc geninfo_all_blocks=1 00:10:37.346 --rc geninfo_unexecuted_blocks=1 00:10:37.346 00:10:37.346 ' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.346 09:28:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:37.346 09:28:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.489 09:28:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:45.489 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:45.489 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.489 09:28:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:45.489 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:45.489 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.489 09:28:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:45.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:10:45.489 00:10:45.489 --- 10.0.0.2 ping statistics --- 00:10:45.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.489 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:45.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:10:45.489 00:10:45.489 --- 10.0.0.1 ping statistics --- 00:10:45.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.489 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2635615 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2635615 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2635615 ']' 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.489 09:28:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 [2024-12-09 09:28:19.989172] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:10:45.489 [2024-12-09 09:28:19.989240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.489 [2024-12-09 09:28:20.094615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.489 [2024-12-09 09:28:20.124803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.489 [2024-12-09 09:28:20.124860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.489 [2024-12-09 09:28:20.124872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.489 [2024-12-09 09:28:20.124883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.489 [2024-12-09 09:28:20.124891] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.489 [2024-12-09 09:28:20.127099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.489 [2024-12-09 09:28:20.127228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.489 [2024-12-09 09:28:20.127400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.489 [2024-12-09 09:28:20.127401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.489 09:28:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:45.749 [2024-12-09 09:28:20.997806] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.749 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.009 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:46.009 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.009 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:46.009 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.269 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:46.269 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.530 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:46.530 09:28:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:46.790 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.790 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:46.790 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.051 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:47.051 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.312 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:47.312 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:47.312 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:47.572 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:47.573 09:28:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:47.833 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:47.833 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.094 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:48.094 [2024-12-09 09:28:23.463603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:48.094 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:48.354 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:48.616 09:28:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.997 09:28:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:49.997 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.997 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.997 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:49.997 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:49.997 09:28:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:52.545 09:28:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:52.545 [global] 00:10:52.545 thread=1 00:10:52.545 invalidate=1 00:10:52.545 rw=write 00:10:52.545 time_based=1 00:10:52.545 runtime=1 00:10:52.545 ioengine=libaio 00:10:52.545 direct=1 00:10:52.545 bs=4096 00:10:52.545 iodepth=1 00:10:52.545 norandommap=0 00:10:52.545 numjobs=1 00:10:52.545 00:10:52.545 verify_dump=1 00:10:52.545 verify_backlog=512 00:10:52.545 verify_state_save=0 00:10:52.545 do_verify=1 00:10:52.545 verify=crc32c-intel 00:10:52.545 [job0] 00:10:52.545 filename=/dev/nvme0n1 00:10:52.545 [job1] 00:10:52.545 filename=/dev/nvme0n2 00:10:52.545 [job2] 00:10:52.545 filename=/dev/nvme0n3 00:10:52.545 [job3] 00:10:52.545 filename=/dev/nvme0n4 00:10:52.545 Could not set queue depth (nvme0n1) 00:10:52.545 Could not set queue depth (nvme0n2) 00:10:52.545 Could not set queue depth (nvme0n3) 00:10:52.545 Could not set queue depth (nvme0n4) 00:10:52.545 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.545 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.545 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.545 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.545 fio-3.35 00:10:52.545 Starting 4 threads 00:10:53.954 00:10:53.954 job0: (groupid=0, jobs=1): err= 0: pid=2637439: Mon Dec 9 09:28:29 2024 00:10:53.954 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.954 slat (nsec): min=6890, max=65027, avg=25111.40, stdev=4056.86 00:10:53.954 clat (usec): min=228, max=1273, avg=979.33, stdev=104.61 00:10:53.954 lat (usec): min=236, max=1298, avg=1004.44, stdev=105.12 00:10:53.954 clat percentiles (usec): 00:10:53.954 | 1.00th=[ 725], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 906], 
00:10:53.954 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:10:53.954 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:53.954 | 99.00th=[ 1172], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1270], 00:10:53.954 | 99.99th=[ 1270] 00:10:53.954 write: IOPS=769, BW=3077KiB/s (3151kB/s)(3080KiB/1001msec); 0 zone resets 00:10:53.954 slat (nsec): min=9435, max=62356, avg=27802.92, stdev=10612.68 00:10:53.954 clat (usec): min=124, max=3135, avg=592.45, stdev=161.94 00:10:53.954 lat (usec): min=134, max=3167, avg=620.25, stdev=165.20 00:10:53.954 clat percentiles (usec): 00:10:53.954 | 1.00th=[ 229], 5.00th=[ 355], 10.00th=[ 400], 20.00th=[ 490], 00:10:53.954 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:10:53.954 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 799], 00:10:53.954 | 99.00th=[ 873], 99.50th=[ 914], 99.90th=[ 3130], 99.95th=[ 3130], 00:10:53.954 | 99.99th=[ 3130] 00:10:53.954 bw ( KiB/s): min= 4087, max= 4087, per=38.33%, avg=4087.00, stdev= 0.00, samples=1 00:10:53.954 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:53.954 lat (usec) : 250=1.17%, 500=12.32%, 750=41.34%, 1000=26.68% 00:10:53.954 lat (msec) : 2=18.41%, 4=0.08% 00:10:53.954 cpu : usr=2.30%, sys=2.80%, ctx=1282, majf=0, minf=1 00:10:53.954 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.954 issued rwts: total=512,770,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.954 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.954 job1: (groupid=0, jobs=1): err= 0: pid=2637440: Mon Dec 9 09:28:29 2024 00:10:53.954 read: IOPS=369, BW=1477KiB/s (1512kB/s)(1496KiB/1013msec) 00:10:53.954 slat (nsec): min=6776, max=47502, avg=26873.10, stdev=5085.88 00:10:53.954 clat (usec): min=269, max=42014, avg=1899.96, stdev=6279.23 00:10:53.954 lat (usec): min=277, max=42040, avg=1926.83, stdev=6279.21 00:10:53.954 clat percentiles (usec): 00:10:53.954 | 1.00th=[ 498], 5.00th=[ 652], 10.00th=[ 766], 20.00th=[ 824], 00:10:53.954 | 30.00th=[ 865], 40.00th=[ 906], 50.00th=[ 947], 60.00th=[ 971], 00:10:53.954 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1074], 95.00th=[ 1106], 00:10:53.954 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:53.954 | 99.99th=[42206] 00:10:53.954 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:10:53.954 slat (nsec): min=9352, max=56046, avg=29209.23, stdev=11337.08 00:10:53.954 clat (usec): min=149, max=948, avg=528.43, stdev=151.28 00:10:53.954 lat (usec): min=159, max=983, avg=557.64, stdev=154.72 00:10:53.954 clat percentiles (usec): 00:10:53.954 | 1.00th=[ 180], 5.00th=[ 273], 10.00th=[ 318], 20.00th=[ 400], 00:10:53.954 | 30.00th=[ 453], 40.00th=[ 490], 50.00th=[ 537], 60.00th=[ 570], 00:10:53.954 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 725], 95.00th=[ 766], 00:10:53.954 | 99.00th=[ 865], 99.50th=[ 898], 99.90th=[ 947], 99.95th=[ 947], 00:10:53.954 | 99.99th=[ 947] 00:10:53.954 bw ( KiB/s): min= 4096, max= 4096, per=38.42%, avg=4096.00, stdev= 0.00, samples=1 00:10:53.954 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:53.954 lat (usec) : 250=1.92%, 500=23.02%, 750=32.84%, 1000=29.35% 00:10:53.954 lat (msec) : 2=11.85%, 50=1.02% 00:10:53.954 cpu : usr=2.17%, sys=2.67%, ctx=888, majf=0, minf=1 00:10:53.954 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.954 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.954 issued rwts: total=374,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.955 job2: (groupid=0, jobs=1): err= 0: pid=2637442: Mon Dec 9 09:28:29 2024 00:10:53.955 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.955 slat (nsec): min=7207, max=57485, avg=25984.87, stdev=4789.98 00:10:53.955 clat (usec): min=213, max=1379, avg=1095.34, stdev=106.80 00:10:53.955 lat (usec): min=223, max=1397, avg=1121.33, stdev=106.38 00:10:53.955 clat percentiles (usec): 00:10:53.955 | 1.00th=[ 799], 5.00th=[ 930], 10.00th=[ 971], 20.00th=[ 1029], 00:10:53.955 | 30.00th=[ 1057], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:10:53.955 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1221], 95.00th=[ 1254], 00:10:53.955 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1385], 99.95th=[ 1385], 00:10:53.955 | 99.99th=[ 1385] 00:10:53.955 write: IOPS=692, BW=2769KiB/s (2836kB/s)(2772KiB/1001msec); 0 zone resets 00:10:53.955 slat (nsec): min=9357, max=59167, avg=29264.00, stdev=10897.66 00:10:53.955 clat (usec): min=273, max=983, avg=572.63, stdev=103.35 00:10:53.955 lat (usec): min=287, max=1024, avg=601.89, stdev=108.48 00:10:53.955 clat percentiles (usec): 00:10:53.955 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 441], 20.00th=[ 482], 00:10:53.955 | 30.00th=[ 519], 40.00th=[ 553], 50.00th=[ 578], 60.00th=[ 603], 00:10:53.955 | 70.00th=[ 627], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 742], 00:10:53.955 | 99.00th=[ 791], 99.50th=[ 832], 99.90th=[ 988], 99.95th=[ 988], 00:10:53.955 | 99.99th=[ 988] 00:10:53.955 bw ( KiB/s): min= 4087, max= 4087, per=38.33%, avg=4087.00, stdev= 0.00, samples=1 00:10:53.955 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:53.955 lat (usec) : 250=0.08%, 500=14.44%, 750=41.00%, 1000=7.97% 00:10:53.955 lat (msec) : 2=36.51% 00:10:53.955 cpu : usr=2.60%, sys=4.40%, ctx=1205, majf=0, minf=1 00:10:53.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.955 issued rwts: total=512,693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.955 job3: (groupid=0, jobs=1): err= 0: pid=2637443: Mon Dec 9 09:28:29 2024 00:10:53.955 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:53.955 slat (nsec): min=6962, max=44635, avg=25925.29, stdev=2632.31 00:10:53.955 clat (usec): min=211, max=1298, avg=992.34, stdev=96.96 00:10:53.955 lat (usec): min=218, max=1324, avg=1018.27, stdev=97.53 00:10:53.955 clat percentiles (usec): 00:10:53.955 | 1.00th=[ 742], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:10:53.955 | 30.00th=[ 955], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1020], 00:10:53.955 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:53.955 | 99.00th=[ 1188], 99.50th=[ 1237], 99.90th=[ 1303], 99.95th=[ 1303], 00:10:53.955 | 99.99th=[ 1303] 00:10:53.955 write: IOPS=724, BW=2897KiB/s (2967kB/s)(2900KiB/1001msec); 0 zone resets 00:10:53.955 slat (nsec): min=9984, max=61903, avg=31885.84, stdev=7975.57 00:10:53.955 clat (usec): min=222, max=926, 
avg=615.28, stdev=130.56 00:10:53.955 lat (usec): min=255, max=959, avg=647.16, stdev=132.55 00:10:53.955 clat percentiles (usec): 00:10:53.955 | 1.00th=[ 334], 5.00th=[ 420], 10.00th=[ 457], 20.00th=[ 498], 00:10:53.955 | 30.00th=[ 537], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:10:53.955 | 70.00th=[ 685], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 840], 00:10:53.955 | 99.00th=[ 881], 99.50th=[ 889], 99.90th=[ 930], 99.95th=[ 930], 00:10:53.955 | 99.99th=[ 930] 00:10:53.955 bw ( KiB/s): min= 4087, max= 4087, per=38.33%, avg=4087.00, stdev= 0.00, samples=1 00:10:53.955 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:53.955 lat (usec) : 250=0.16%, 500=11.80%, 750=35.81%, 1000=31.53% 00:10:53.955 lat (msec) : 2=20.70% 00:10:53.955 cpu : usr=1.90%, sys=3.70%, ctx=1237, majf=0, minf=1 00:10:53.955 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.955 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.955 issued rwts: total=512,725,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.955 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.955 00:10:53.955 Run status group 0 (all jobs): 00:10:53.955 READ: bw=7542KiB/s (7723kB/s), 1477KiB/s-2046KiB/s (1512kB/s-2095kB/s), io=7640KiB (7823kB), run=1001-1013msec 00:10:53.955 WRITE: bw=10.4MiB/s (10.9MB/s), 2022KiB/s-3077KiB/s (2070kB/s-3151kB/s), io=10.5MiB (11.1MB), run=1001-1013msec 00:10:53.955 00:10:53.955 Disk stats (read/write): 00:10:53.955 nvme0n1: ios=553/512, merge=0/0, ticks=564/290, in_queue=854, util=87.37% 00:10:53.955 nvme0n2: ios=420/512, merge=0/0, ticks=993/217, in_queue=1210, util=100.00% 00:10:53.955 nvme0n3: ios=459/512, merge=0/0, ticks=467/240, in_queue=707, util=88.47% 00:10:53.955 nvme0n4: ios=515/512, merge=0/0, ticks=699/276, in_queue=975, util=91.65% 00:10:53.955 09:28:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:53.955 [global] 00:10:53.955 thread=1 00:10:53.955 invalidate=1 00:10:53.955 rw=randwrite 00:10:53.955 time_based=1 00:10:53.955 runtime=1 00:10:53.955 ioengine=libaio 00:10:53.955 direct=1 00:10:53.955 bs=4096 00:10:53.955 iodepth=1 00:10:53.955 norandommap=0 00:10:53.955 numjobs=1 00:10:53.955 00:10:53.955 verify_dump=1 00:10:53.955 verify_backlog=512 00:10:53.955 verify_state_save=0 00:10:53.955 do_verify=1 00:10:53.955 verify=crc32c-intel 00:10:53.955 [job0] 00:10:53.955 filename=/dev/nvme0n1 00:10:53.955 [job1] 00:10:53.955 filename=/dev/nvme0n2 00:10:53.955 [job2] 00:10:53.955 filename=/dev/nvme0n3 00:10:53.955 [job3] 00:10:53.955 filename=/dev/nvme0n4 00:10:53.955 Could not set queue depth (nvme0n1) 00:10:53.955 Could not set queue depth (nvme0n2) 00:10:53.955 Could not set queue depth (nvme0n3) 00:10:53.955 Could not set queue depth (nvme0n4) 00:10:54.218 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.218 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.218 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.218 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:54.218 fio-3.35 00:10:54.218 Starting 4 threads 00:10:55.602 
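(Recap before the second fio run: the RPC sequence traced at 09:28:20-09:28:25 is the entire target provisioning. Condensed below, with rpc.py abbreviating the full /var/jenkins/.../spdk/scripts/rpc.py path shown in the trace; every flag is as traced.)

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512      # twice -> Malloc0, Malloc1 (plain namespaces)
rpc.py bdev_malloc_create 64 512      # twice more -> Malloc2, Malloc3 (RAID0 members)
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
rpc.py bdev_malloc_create 64 512      # three more -> Malloc4, Malloc5, Malloc6 (concat members)
rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:... --hostid=...    # host UUID as traced above

Four namespaces on cnode1, hence the four /dev/nvme0n1..n4 block devices that waitforserial counted and that every fio job list in this test targets.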
00:10:55.602 job0: (groupid=0, jobs=1): err= 0: pid=2637961: Mon Dec 9 09:28:30 2024 00:10:55.602 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1028msec) 00:10:55.602 slat (nsec): min=27495, max=28838, avg=28063.00, stdev=349.29 00:10:55.602 clat (usec): min=1025, max=42026, avg=39388.27, stdev=9583.75 00:10:55.602 lat (usec): min=1053, max=42054, avg=39416.34, stdev=9583.68 00:10:55.602 clat percentiles (usec): 00:10:55.602 | 1.00th=[ 1029], 5.00th=[ 1029], 10.00th=[41157], 20.00th=[41157], 00:10:55.602 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:55.602 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:55.602 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:55.602 | 99.99th=[42206] 00:10:55.602 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:10:55.602 slat (nsec): min=9046, max=71997, avg=32118.15, stdev=9663.49 00:10:55.602 clat (usec): min=243, max=1154, avg=582.24, stdev=135.79 00:10:55.602 lat (usec): min=252, max=1188, avg=614.36, stdev=139.66 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 289], 5.00th=[ 351], 10.00th=[ 388], 20.00th=[ 461], 00:10:55.603 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 619], 00:10:55.603 | 70.00th=[ 652], 80.00th=[ 701], 90.00th=[ 750], 95.00th=[ 799], 00:10:55.603 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 1156], 99.95th=[ 1156], 00:10:55.603 | 99.99th=[ 1156] 00:10:55.603 bw ( KiB/s): min= 4096, max= 4096, per=50.33%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.603 lat (usec) : 250=0.38%, 500=26.04%, 750=60.75%, 1000=9.25% 00:10:55.603 lat (msec) : 2=0.38%, 50=3.21% 00:10:55.603 cpu : usr=1.36%, sys=1.75%, ctx=531, majf=0, minf=1 00:10:55.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.603 job1: (groupid=0, jobs=1): err= 0: pid=2637962: Mon Dec 9 09:28:30 2024 00:10:55.603 read: IOPS=16, BW=65.9KiB/s (67.5kB/s)(68.0KiB/1032msec) 00:10:55.603 slat (nsec): min=27895, max=33335, avg=28714.47, stdev=1280.17 00:10:55.603 clat (usec): min=40955, max=42619, avg=41844.94, stdev=394.46 00:10:55.603 lat (usec): min=40984, max=42648, avg=41873.65, stdev=394.60 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:10:55.603 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:55.603 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:55.603 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:55.603 | 99.99th=[42730] 00:10:55.603 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:10:55.603 slat (nsec): min=9254, max=56360, avg=32515.20, stdev=9082.87 00:10:55.603 clat (usec): min=230, max=1339, avg=584.78, stdev=135.85 00:10:55.603 lat (usec): min=241, max=1395, avg=617.30, stdev=138.52 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 273], 5.00th=[ 367], 10.00th=[ 396], 20.00th=[ 465], 00:10:55.603 | 30.00th=[ 515], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 627], 00:10:55.603 | 70.00th=[ 660], 80.00th=[ 701], 90.00th=[ 
742], 95.00th=[ 775], 00:10:55.603 | 99.00th=[ 889], 99.50th=[ 1020], 99.90th=[ 1336], 99.95th=[ 1336], 00:10:55.603 | 99.99th=[ 1336] 00:10:55.603 bw ( KiB/s): min= 4096, max= 4096, per=50.33%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.603 lat (usec) : 250=0.38%, 500=26.09%, 750=61.63%, 1000=8.13% 00:10:55.603 lat (msec) : 2=0.57%, 50=3.21% 00:10:55.603 cpu : usr=0.68%, sys=2.42%, ctx=531, majf=0, minf=1 00:10:55.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.603 job2: (groupid=0, jobs=1): err= 0: pid=2637964: Mon Dec 9 09:28:30 2024 00:10:55.603 read: IOPS=50, BW=203KiB/s (208kB/s)(212KiB/1042msec) 00:10:55.603 slat (nsec): min=8107, max=46901, avg=27169.15, stdev=5047.85 00:10:55.603 clat (usec): min=603, max=42650, avg=13332.52, stdev=18935.47 00:10:55.603 lat (usec): min=633, max=42676, avg=13359.69, stdev=18934.52 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 603], 5.00th=[ 832], 10.00th=[ 930], 20.00th=[ 971], 00:10:55.603 | 30.00th=[ 1012], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:10:55.603 | 70.00th=[40633], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:55.603 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:55.603 | 99.99th=[42730] 00:10:55.603 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:10:55.603 slat (nsec): min=10087, max=58622, avg=30236.22, stdev=9502.15 00:10:55.603 clat (usec): min=273, max=1327, avg=613.50, stdev=113.98 00:10:55.603 lat (usec): min=283, max=1362, avg=643.74, stdev=117.97 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 519], 00:10:55.603 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 644], 00:10:55.603 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 742], 95.00th=[ 766], 00:10:55.603 | 99.00th=[ 857], 99.50th=[ 881], 99.90th=[ 1336], 99.95th=[ 1336], 00:10:55.603 | 99.99th=[ 1336] 00:10:55.603 bw ( KiB/s): min= 4096, max= 4096, per=50.33%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.603 lat (usec) : 500=15.04%, 750=68.67%, 1000=9.20% 00:10:55.603 lat (msec) : 2=4.25%, 50=2.83% 00:10:55.603 cpu : usr=1.06%, sys=1.34%, ctx=566, majf=0, minf=1 00:10:55.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 issued rwts: total=53,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.603 job3: (groupid=0, jobs=1): err= 0: pid=2637966: Mon Dec 9 09:28:30 2024 00:10:55.603 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:55.603 slat (nsec): min=2683, max=69140, avg=27576.40, stdev=5093.06 00:10:55.603 clat (usec): min=404, max=42394, avg=1230.55, stdev=3188.56 00:10:55.603 lat (usec): min=437, max=42404, avg=1258.13, stdev=3188.18 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 660], 5.00th=[ 
816], 10.00th=[ 865], 20.00th=[ 914], 00:10:55.603 | 30.00th=[ 955], 40.00th=[ 963], 50.00th=[ 979], 60.00th=[ 996], 00:10:55.603 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:10:55.603 | 99.00th=[ 1205], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:55.603 | 99.99th=[42206] 00:10:55.603 write: IOPS=583, BW=2334KiB/s (2390kB/s)(2336KiB/1001msec); 0 zone resets 00:10:55.603 slat (nsec): min=9320, max=79053, avg=31996.90, stdev=11517.19 00:10:55.603 clat (usec): min=213, max=1223, avg=561.94, stdev=144.99 00:10:55.603 lat (usec): min=244, max=1261, avg=593.94, stdev=148.93 00:10:55.603 clat percentiles (usec): 00:10:55.603 | 1.00th=[ 273], 5.00th=[ 314], 10.00th=[ 355], 20.00th=[ 437], 00:10:55.603 | 30.00th=[ 498], 40.00th=[ 529], 50.00th=[ 562], 60.00th=[ 603], 00:10:55.603 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 799], 00:10:55.603 | 99.00th=[ 857], 99.50th=[ 996], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:55.603 | 99.99th=[ 1221] 00:10:55.603 bw ( KiB/s): min= 4096, max= 4096, per=50.33%, avg=4096.00, stdev= 0.00, samples=1 00:10:55.603 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:55.603 lat (usec) : 250=0.27%, 500=16.33%, 750=33.03%, 1000=34.67% 00:10:55.603 lat (msec) : 2=15.33%, 20=0.09%, 50=0.27% 00:10:55.603 cpu : usr=2.40%, sys=4.20%, ctx=1097, majf=0, minf=1 00:10:55.603 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:55.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:55.603 issued rwts: total=512,584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:55.603 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:55.603 00:10:55.603 Run status group 0 (all jobs): 00:10:55.603 READ: bw=2303KiB/s (2359kB/s), 65.9KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=2400KiB (2458kB), run=1001-1042msec 00:10:55.603 WRITE: bw=8138KiB/s (8334kB/s), 1965KiB/s-2334KiB/s (2013kB/s-2390kB/s), io=8480KiB (8684kB), run=1001-1042msec 00:10:55.603 00:10:55.603 Disk stats (read/write): 00:10:55.603 nvme0n1: ios=68/512, merge=0/0, ticks=1511/233, in_queue=1744, util=89.01% 00:10:55.603 nvme0n2: ios=67/512, merge=0/0, ticks=1256/221, in_queue=1477, util=92.16% 00:10:55.603 nvme0n3: ios=105/512, merge=0/0, ticks=934/288, in_queue=1222, util=94.32% 00:10:55.603 nvme0n4: ios=351/512, merge=0/0, ticks=1347/262, in_queue=1609, util=98.74% 00:10:55.603 09:28:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:55.603 [global] 00:10:55.603 thread=1 00:10:55.603 invalidate=1 00:10:55.603 rw=write 00:10:55.603 time_based=1 00:10:55.603 runtime=1 00:10:55.603 ioengine=libaio 00:10:55.603 direct=1 00:10:55.603 bs=4096 00:10:55.603 iodepth=128 00:10:55.603 norandommap=0 00:10:55.603 numjobs=1 00:10:55.603 00:10:55.603 verify_dump=1 00:10:55.603 verify_backlog=512 00:10:55.603 verify_state_save=0 00:10:55.603 do_verify=1 00:10:55.603 verify=crc32c-intel 00:10:55.603 [job0] 00:10:55.603 filename=/dev/nvme0n1 00:10:55.603 [job1] 00:10:55.603 filename=/dev/nvme0n2 00:10:55.603 [job2] 00:10:55.603 filename=/dev/nvme0n3 00:10:55.603 [job3] 00:10:55.603 filename=/dev/nvme0n4 00:10:55.603 Could not set queue depth (nvme0n1) 00:10:55.603 Could not set queue depth (nvme0n2) 00:10:55.603 Could not set queue depth (nvme0n3) 00:10:55.603 Could not set queue depth (nvme0n4) 
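(The fio-wrapper flags -p nvmf -i 4096 -d 128 -t write expand to exactly the [global]/[jobN] config printed above. Written out as a standalone job file it would look like the sketch below; the nvmf.fio file name and the heredoc framing are assumptions of this note, while the option values are the printed ones.)

cat > nvmf.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write            # -t write
time_based=1
runtime=1           # -r 1
ioengine=libaio
direct=1
bs=4096             # -i 4096
iodepth=128         # -d 128
norandommap=0
numjobs=1
verify=crc32c-intel # -v: CRC32C verification of written data
do_verify=1
verify_dump=1
verify_backlog=512
verify_state_save=0

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf.fio

As a sanity check on the report that follows, fio's BW column is just IOPS × bs: job0's 6596 read IOPS × 4096 B ≈ 27.0 MB/s, i.e. the printed 25.8 MiB/s.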
00:10:55.864 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.864 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.864 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.864 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:55.864 fio-3.35 00:10:55.864 Starting 4 threads 00:10:57.250 00:10:57.250 job0: (groupid=0, jobs=1): err= 0: pid=2638488: Mon Dec 9 09:28:32 2024 00:10:57.250 read: IOPS=6596, BW=25.8MiB/s (27.0MB/s)(26.0MiB/1009msec) 00:10:57.250 slat (nsec): min=924, max=12344k, avg=65082.45, stdev=474569.54 00:10:57.250 clat (usec): min=2038, max=27344, avg=8676.97, stdev=3093.64 00:10:57.250 lat (usec): min=2055, max=27352, avg=8742.05, stdev=3125.69 00:10:57.250 clat percentiles (usec): 00:10:57.250 | 1.00th=[ 3556], 5.00th=[ 4948], 10.00th=[ 5735], 20.00th=[ 6521], 00:10:57.250 | 30.00th=[ 7046], 40.00th=[ 7439], 50.00th=[ 8094], 60.00th=[ 8356], 00:10:57.250 | 70.00th=[ 9503], 80.00th=[10683], 90.00th=[11994], 95.00th=[15008], 00:10:57.250 | 99.00th=[19530], 99.50th=[22414], 99.90th=[25297], 99.95th=[27395], 00:10:57.250 | 99.99th=[27395] 00:10:57.250 write: IOPS=7013, BW=27.4MiB/s (28.7MB/s)(27.6MiB/1009msec); 0 zone resets 00:10:57.250 slat (nsec): min=1592, max=7411.9k, avg=71774.17, stdev=404774.66 00:10:57.250 clat (usec): min=1149, max=32268, avg=9927.16, stdev=6118.62 00:10:57.250 lat (usec): min=1159, max=32274, avg=9998.94, stdev=6158.43 00:10:57.250 clat percentiles (usec): 00:10:57.250 | 1.00th=[ 2311], 5.00th=[ 3654], 10.00th=[ 4359], 20.00th=[ 5211], 00:10:57.250 | 30.00th=[ 5735], 40.00th=[ 6718], 50.00th=[ 7767], 60.00th=[ 9634], 00:10:57.250 | 70.00th=[11207], 80.00th=[13173], 90.00th=[18220], 95.00th=[23725], 00:10:57.250 | 99.00th=[29492], 99.50th=[30802], 99.90th=[31851], 99.95th=[32113], 00:10:57.250 | 99.99th=[32375] 00:10:57.251 bw ( KiB/s): min=25224, max=30376, per=33.99%, avg=27800.00, stdev=3643.01, samples=2 00:10:57.251 iops : min= 6306, max= 7594, avg=6950.00, stdev=910.75, samples=2 00:10:57.251 lat (msec) : 2=0.27%, 4=3.87%, 10=63.23%, 20=27.63%, 50=5.00% 00:10:57.251 cpu : usr=5.46%, sys=5.85%, ctx=544, majf=0, minf=1 00:10:57.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.251 issued rwts: total=6656,7077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.251 job1: (groupid=0, jobs=1): err= 0: pid=2638490: Mon Dec 9 09:28:32 2024 00:10:57.251 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:10:57.251 slat (nsec): min=987, max=13725k, avg=101609.61, stdev=638412.37 00:10:57.251 clat (usec): min=5873, max=38116, avg=12184.20, stdev=5652.87 00:10:57.251 lat (usec): min=5880, max=38862, avg=12285.81, stdev=5720.04 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 6915], 5.00th=[ 7898], 10.00th=[ 8225], 20.00th=[ 8848], 00:10:57.251 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10683], 00:10:57.251 | 70.00th=[11469], 80.00th=[12780], 90.00th=[21365], 95.00th=[25297], 00:10:57.251 | 99.00th=[33162], 99.50th=[35914], 99.90th=[38011], 99.95th=[38011], 00:10:57.251 | 99.99th=[38011] 
00:10:57.251 write: IOPS=4730, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1006msec); 0 zone resets 00:10:57.251 slat (nsec): min=1659, max=32765k, avg=106774.03, stdev=750853.82 00:10:57.251 clat (usec): min=4667, max=58251, avg=13579.28, stdev=6164.06 00:10:57.251 lat (usec): min=4671, max=58294, avg=13686.06, stdev=6239.56 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 7177], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8717], 00:10:57.251 | 30.00th=[10552], 40.00th=[11076], 50.00th=[12256], 60.00th=[12518], 00:10:57.251 | 70.00th=[14877], 80.00th=[16581], 90.00th=[20841], 95.00th=[24249], 00:10:57.251 | 99.00th=[38011], 99.50th=[40109], 99.90th=[41157], 99.95th=[58459], 00:10:57.251 | 99.99th=[58459] 00:10:57.251 bw ( KiB/s): min=16072, max=20984, per=22.65%, avg=18528.00, stdev=3473.31, samples=2 00:10:57.251 iops : min= 4018, max= 5246, avg=4632.00, stdev=868.33, samples=2 00:10:57.251 lat (msec) : 10=34.74%, 20=53.89%, 50=11.33%, 100=0.04% 00:10:57.251 cpu : usr=3.68%, sys=3.68%, ctx=563, majf=0, minf=2 00:10:57.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.251 issued rwts: total=4608,4759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.251 job2: (groupid=0, jobs=1): err= 0: pid=2638492: Mon Dec 9 09:28:32 2024 00:10:57.251 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:10:57.251 slat (nsec): min=913, max=17045k, avg=112322.11, stdev=783154.35 00:10:57.251 clat (usec): min=6392, max=56784, avg=14659.57, stdev=10903.12 00:10:57.251 lat (usec): min=7130, max=56790, avg=14771.89, stdev=10962.81 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8848], 00:10:57.251 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10290], 00:10:57.251 | 70.00th=[13566], 80.00th=[15664], 90.00th=[26084], 95.00th=[49021], 00:10:57.251 | 99.00th=[54789], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:10:57.251 | 99.99th=[56886] 00:10:57.251 write: IOPS=5472, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec); 0 zone resets 00:10:57.251 slat (nsec): min=1570, max=12041k, avg=74090.43, stdev=439937.13 00:10:57.251 clat (usec): min=1306, max=42876, avg=9220.42, stdev=5045.28 00:10:57.251 lat (usec): min=1788, max=50505, avg=9294.51, stdev=5077.37 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 3818], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 6718], 00:10:57.251 | 30.00th=[ 6915], 40.00th=[ 7635], 50.00th=[ 7963], 60.00th=[ 8225], 00:10:57.251 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[11600], 95.00th=[22938], 00:10:57.251 | 99.00th=[32113], 99.50th=[33424], 99.90th=[42730], 99.95th=[42730], 00:10:57.251 | 99.99th=[42730] 00:10:57.251 bw ( KiB/s): min=15160, max=27688, per=26.19%, avg=21424.00, stdev=8858.63, samples=2 00:10:57.251 iops : min= 3790, max= 6922, avg=5356.00, stdev=2214.66, samples=2 00:10:57.251 lat (msec) : 2=0.08%, 4=0.48%, 10=66.83%, 20=21.43%, 50=9.49% 00:10:57.251 lat (msec) : 100=1.69% 00:10:57.251 cpu : usr=3.40%, sys=3.30%, ctx=597, majf=0, minf=1 00:10:57.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:10:57.251 issued rwts: total=5120,5483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.251 job3: (groupid=0, jobs=1): err= 0: pid=2638493: Mon Dec 9 09:28:32 2024 00:10:57.251 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:10:57.251 slat (nsec): min=978, max=21277k, avg=190565.85, stdev=1316059.72 00:10:57.251 clat (usec): min=3783, max=74487, avg=24001.08, stdev=16928.72 00:10:57.251 lat (usec): min=3792, max=74494, avg=24191.65, stdev=17017.86 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 8291], 5.00th=[ 9765], 10.00th=[12649], 20.00th=[13435], 00:10:57.251 | 30.00th=[14353], 40.00th=[15008], 50.00th=[17171], 60.00th=[19006], 00:10:57.251 | 70.00th=[22938], 80.00th=[29492], 90.00th=[58983], 95.00th=[65274], 00:10:57.251 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:10:57.251 | 99.99th=[74974] 00:10:57.251 write: IOPS=3298, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1004msec); 0 zone resets 00:10:57.251 slat (nsec): min=1677, max=10328k, avg=119354.13, stdev=476488.02 00:10:57.251 clat (usec): min=2060, max=38643, avg=16146.90, stdev=6285.59 00:10:57.251 lat (usec): min=2070, max=38650, avg=16266.26, stdev=6319.47 00:10:57.251 clat percentiles (usec): 00:10:57.251 | 1.00th=[ 4883], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[11731], 00:10:57.251 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13960], 60.00th=[16712], 00:10:57.251 | 70.00th=[18220], 80.00th=[21627], 90.00th=[25035], 95.00th=[28443], 00:10:57.251 | 99.00th=[32637], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:10:57.251 | 99.99th=[38536] 00:10:57.251 bw ( KiB/s): min= 8264, max=17216, per=15.58%, avg=12740.00, stdev=6330.02, samples=2 00:10:57.251 iops : min= 2066, max= 4304, avg=3185.00, stdev=1582.50, samples=2 00:10:57.251 lat (msec) : 4=0.52%, 10=8.79%, 20=60.70%, 50=24.31%, 100=5.69% 00:10:57.251 cpu : usr=3.19%, sys=2.89%, ctx=453, majf=0, minf=1 00:10:57.251 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:57.251 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.251 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.251 issued rwts: total=3072,3312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.251 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.251 00:10:57.251 Run status group 0 (all jobs): 00:10:57.251 READ: bw=75.3MiB/s (79.0MB/s), 12.0MiB/s-25.8MiB/s (12.5MB/s-27.0MB/s), io=76.0MiB (79.7MB), run=1002-1009msec 00:10:57.251 WRITE: bw=79.9MiB/s (83.8MB/s), 12.9MiB/s-27.4MiB/s (13.5MB/s-28.7MB/s), io=80.6MiB (84.5MB), run=1002-1009msec 00:10:57.251 00:10:57.251 Disk stats (read/write): 00:10:57.251 nvme0n1: ios=5682/5990, merge=0/0, ticks=47624/52424, in_queue=100048, util=87.17% 00:10:57.251 nvme0n2: ios=3679/4096, merge=0/0, ticks=22581/24983, in_queue=47564, util=100.00% 00:10:57.251 nvme0n3: ios=4096/4419, merge=0/0, ticks=15904/10997, in_queue=26901, util=88.17% 00:10:57.251 nvme0n4: ios=2617/2839, merge=0/0, ticks=22732/18798, in_queue=41530, util=96.47% 00:10:57.251 09:28:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:57.251 [global] 00:10:57.251 thread=1 00:10:57.251 invalidate=1 00:10:57.251 rw=randwrite 00:10:57.251 time_based=1 00:10:57.251 runtime=1 00:10:57.251 ioengine=libaio 00:10:57.251 direct=1 00:10:57.251 bs=4096 00:10:57.251 
iodepth=128 00:10:57.251 norandommap=0 00:10:57.251 numjobs=1 00:10:57.251 00:10:57.251 verify_dump=1 00:10:57.251 verify_backlog=512 00:10:57.251 verify_state_save=0 00:10:57.251 do_verify=1 00:10:57.251 verify=crc32c-intel 00:10:57.251 [job0] 00:10:57.251 filename=/dev/nvme0n1 00:10:57.251 [job1] 00:10:57.251 filename=/dev/nvme0n2 00:10:57.251 [job2] 00:10:57.251 filename=/dev/nvme0n3 00:10:57.251 [job3] 00:10:57.251 filename=/dev/nvme0n4 00:10:57.251 Could not set queue depth (nvme0n1) 00:10:57.251 Could not set queue depth (nvme0n2) 00:10:57.251 Could not set queue depth (nvme0n3) 00:10:57.251 Could not set queue depth (nvme0n4) 00:10:57.512 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.512 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.512 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.512 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:57.512 fio-3.35 00:10:57.512 Starting 4 threads 00:10:58.899 00:10:58.899 job0: (groupid=0, jobs=1): err= 0: pid=2639016: Mon Dec 9 09:28:34 2024 00:10:58.899 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:58.899 slat (nsec): min=919, max=9800.5k, avg=147421.96, stdev=803453.68 00:10:58.899 clat (usec): min=9054, max=43940, avg=19417.91, stdev=6481.20 00:10:58.899 lat (usec): min=9771, max=43948, avg=19565.33, stdev=6478.79 00:10:58.899 clat percentiles (usec): 00:10:58.899 | 1.00th=[10814], 5.00th=[12649], 10.00th=[13435], 20.00th=[13829], 00:10:58.899 | 30.00th=[14615], 40.00th=[16909], 50.00th=[18744], 60.00th=[19530], 00:10:58.899 | 70.00th=[20579], 80.00th=[23200], 90.00th=[29230], 95.00th=[34341], 00:10:58.899 | 99.00th=[40109], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:58.899 | 99.99th=[43779] 00:10:58.900 write: IOPS=4024, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1002msec); 0 zone resets 00:10:58.900 slat (nsec): min=1522, max=13040k, avg=110814.48, stdev=656859.87 00:10:58.900 clat (usec): min=840, max=34246, avg=14200.92, stdev=5778.79 00:10:58.900 lat (usec): min=5174, max=34253, avg=14311.73, stdev=5785.35 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 6915], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10421], 00:10:58.900 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12125], 60.00th=[13042], 00:10:58.900 | 70.00th=[15139], 80.00th=[16581], 90.00th=[21365], 95.00th=[28967], 00:10:58.900 | 99.00th=[33424], 99.50th=[33424], 99.90th=[34341], 99.95th=[34341], 00:10:58.900 | 99.99th=[34341] 00:10:58.900 bw ( KiB/s): min=14344, max=16904, per=18.47%, avg=15624.00, stdev=1810.19, samples=2 00:10:58.900 iops : min= 3586, max= 4226, avg=3906.00, stdev=452.55, samples=2 00:10:58.900 lat (usec) : 1000=0.01% 00:10:58.900 lat (msec) : 10=6.13%, 20=71.41%, 50=22.45% 00:10:58.900 cpu : usr=3.20%, sys=4.90%, ctx=260, majf=0, minf=2 00:10:58.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.900 issued rwts: total=3584,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.900 job1: (groupid=0, jobs=1): err= 0: pid=2639017: Mon Dec 9 09:28:34 2024 00:10:58.900 read: 
IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:10:58.900 slat (nsec): min=913, max=7579.2k, avg=63164.35, stdev=402920.57 00:10:58.900 clat (usec): min=1869, max=18649, avg=8122.61, stdev=2284.34 00:10:58.900 lat (usec): min=1878, max=18679, avg=8185.77, stdev=2318.10 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 3392], 5.00th=[ 4752], 10.00th=[ 5669], 20.00th=[ 6783], 00:10:58.900 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7635], 60.00th=[ 8094], 00:10:58.900 | 70.00th=[ 8717], 80.00th=[ 9896], 90.00th=[10945], 95.00th=[12387], 00:10:58.900 | 99.00th=[15008], 99.50th=[15533], 99.90th=[17171], 99.95th=[17433], 00:10:58.900 | 99.99th=[18744] 00:10:58.900 write: IOPS=7385, BW=28.8MiB/s (30.2MB/s)(29.0MiB/1004msec); 0 zone resets 00:10:58.900 slat (nsec): min=1551, max=6779.4k, avg=66091.08, stdev=410476.94 00:10:58.900 clat (usec): min=533, max=68761, avg=9135.50, stdev=8710.20 00:10:58.900 lat (usec): min=544, max=68769, avg=9201.60, stdev=8772.34 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 1352], 5.00th=[ 3949], 10.00th=[ 4490], 20.00th=[ 5800], 00:10:58.900 | 30.00th=[ 6652], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 7963], 00:10:58.900 | 70.00th=[ 8586], 80.00th=[ 9372], 90.00th=[11994], 95.00th=[15139], 00:10:58.900 | 99.00th=[58983], 99.50th=[64226], 99.90th=[66323], 99.95th=[68682], 00:10:58.900 | 99.99th=[68682] 00:10:58.900 bw ( KiB/s): min=28576, max=29728, per=34.46%, avg=29152.00, stdev=814.59, samples=2 00:10:58.900 iops : min= 7144, max= 7432, avg=7288.00, stdev=203.65, samples=2 00:10:58.900 lat (usec) : 750=0.04%, 1000=0.15% 00:10:58.900 lat (msec) : 2=0.91%, 4=2.88%, 10=78.67%, 20=15.51%, 50=0.74% 00:10:58.900 lat (msec) : 100=1.09% 00:10:58.900 cpu : usr=3.59%, sys=7.28%, ctx=657, majf=0, minf=2 00:10:58.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.900 issued rwts: total=7168,7415,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.900 job2: (groupid=0, jobs=1): err= 0: pid=2639018: Mon Dec 9 09:28:34 2024 00:10:58.900 read: IOPS=4660, BW=18.2MiB/s (19.1MB/s)(19.0MiB/1043msec) 00:10:58.900 slat (nsec): min=948, max=18247k, avg=95832.63, stdev=592546.98 00:10:58.900 clat (usec): min=5573, max=52593, avg=12498.70, stdev=7223.20 00:10:58.900 lat (usec): min=5576, max=56012, avg=12594.53, stdev=7255.73 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 6652], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[ 9372], 00:10:58.900 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10421], 60.00th=[10814], 00:10:58.900 | 70.00th=[11600], 80.00th=[12518], 90.00th=[16712], 95.00th=[26346], 00:10:58.900 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[52691], 00:10:58.900 | 99.99th=[52691] 00:10:58.900 write: IOPS=4908, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1043msec); 0 zone resets 00:10:58.900 slat (nsec): min=1592, max=19549k, avg=99777.49, stdev=518997.97 00:10:58.900 clat (usec): min=5170, max=38678, avg=13886.43, stdev=6533.09 00:10:58.900 lat (usec): min=5179, max=38693, avg=13986.20, stdev=6572.74 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 6783], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9372], 00:10:58.900 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11600], 60.00th=[12649], 00:10:58.900 | 70.00th=[14353], 80.00th=[15795], 
90.00th=[24773], 95.00th=[28705], 00:10:58.900 | 99.00th=[35914], 99.50th=[36963], 99.90th=[38536], 99.95th=[38536], 00:10:58.900 | 99.99th=[38536] 00:10:58.900 bw ( KiB/s): min=18488, max=22472, per=24.21%, avg=20480.00, stdev=2817.11, samples=2 00:10:58.900 iops : min= 4622, max= 5618, avg=5120.00, stdev=704.28, samples=2 00:10:58.900 lat (msec) : 10=33.92%, 20=55.14%, 50=10.30%, 100=0.63% 00:10:58.900 cpu : usr=3.17%, sys=4.61%, ctx=621, majf=0, minf=1 00:10:58.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.900 issued rwts: total=4861,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.900 job3: (groupid=0, jobs=1): err= 0: pid=2639019: Mon Dec 9 09:28:34 2024 00:10:58.900 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:58.900 slat (nsec): min=987, max=45143k, avg=118487.55, stdev=1486531.46 00:10:58.900 clat (usec): min=1831, max=101792, avg=15920.00, stdev=18764.37 00:10:58.900 lat (usec): min=1871, max=101797, avg=16038.49, stdev=18859.82 00:10:58.900 clat percentiles (msec): 00:10:58.900 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 7], 00:10:58.900 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 11], 00:10:58.900 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 29], 95.00th=[ 59], 00:10:58.900 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 103], 99.95th=[ 103], 00:10:58.900 | 99.99th=[ 103] 00:10:58.900 write: IOPS=5468, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1004msec); 0 zone resets 00:10:58.900 slat (nsec): min=1608, max=6280.4k, avg=64223.40, stdev=402979.45 00:10:58.900 clat (usec): min=624, max=28571, avg=8324.78, stdev=3597.07 00:10:58.900 lat (usec): min=645, max=28574, avg=8389.01, stdev=3621.33 00:10:58.900 clat percentiles (usec): 00:10:58.900 | 1.00th=[ 2147], 5.00th=[ 4113], 10.00th=[ 4621], 20.00th=[ 6325], 00:10:58.900 | 30.00th=[ 6456], 40.00th=[ 6718], 50.00th=[ 7111], 60.00th=[ 7832], 00:10:58.900 | 70.00th=[ 9372], 80.00th=[10814], 90.00th=[12518], 95.00th=[14877], 00:10:58.900 | 99.00th=[23200], 99.50th=[25560], 99.90th=[28443], 99.95th=[28443], 00:10:58.900 | 99.99th=[28443] 00:10:58.900 bw ( KiB/s): min=10136, max=32768, per=25.36%, avg=21452.00, stdev=16003.24, samples=2 00:10:58.900 iops : min= 2534, max= 8192, avg=5363.00, stdev=4000.81, samples=2 00:10:58.900 lat (usec) : 750=0.05%, 1000=0.05% 00:10:58.900 lat (msec) : 2=0.26%, 4=2.07%, 10=63.58%, 20=26.25%, 50=2.95% 00:10:58.900 lat (msec) : 100=4.20%, 250=0.58% 00:10:58.900 cpu : usr=3.79%, sys=6.78%, ctx=357, majf=0, minf=2 00:10:58.900 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:58.900 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.900 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.900 issued rwts: total=5120,5490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.900 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.900 00:10:58.900 Run status group 0 (all jobs): 00:10:58.900 READ: bw=77.6MiB/s (81.4MB/s), 14.0MiB/s-27.9MiB/s (14.7MB/s-29.2MB/s), io=81.0MiB (84.9MB), run=1002-1043msec 00:10:58.900 WRITE: bw=82.6MiB/s (86.6MB/s), 15.7MiB/s-28.8MiB/s (16.5MB/s-30.2MB/s), io=86.2MiB (90.3MB), run=1002-1043msec 00:10:58.900 00:10:58.900 Disk stats (read/write): 00:10:58.900 nvme0n1: ios=3108/3154, 
merge=0/0, ticks=15414/10174, in_queue=25588, util=90.58% 00:10:58.900 nvme0n2: ios=6172/6399, merge=0/0, ticks=27746/30074, in_queue=57820, util=98.57% 00:10:58.900 nvme0n3: ios=3988/4096, merge=0/0, ticks=24031/26752, in_queue=50783, util=99.89% 00:10:58.900 nvme0n4: ios=4590/4608, merge=0/0, ticks=35522/27717, in_queue=63239, util=100.00% 00:10:58.900 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:58.900 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2639353 00:10:58.900 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:58.900 09:28:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:58.900 [global] 00:10:58.900 thread=1 00:10:58.900 invalidate=1 00:10:58.900 rw=read 00:10:58.900 time_based=1 00:10:58.900 runtime=10 00:10:58.900 ioengine=libaio 00:10:58.900 direct=1 00:10:58.900 bs=4096 00:10:58.900 iodepth=1 00:10:58.900 norandommap=1 00:10:58.900 numjobs=1 00:10:58.900 00:10:58.900 [job0] 00:10:58.900 filename=/dev/nvme0n1 00:10:58.900 [job1] 00:10:58.900 filename=/dev/nvme0n2 00:10:58.900 [job2] 00:10:58.900 filename=/dev/nvme0n3 00:10:58.900 [job3] 00:10:58.900 filename=/dev/nvme0n4 00:10:58.900 Could not set queue depth (nvme0n1) 00:10:58.900 Could not set queue depth (nvme0n2) 00:10:58.900 Could not set queue depth (nvme0n3) 00:10:58.900 Could not set queue depth (nvme0n4) 00:10:59.161 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.161 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.161 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.161 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.161 fio-3.35 00:10:59.161 Starting 4 threads 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:02.458 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=12578816, buflen=4096 00:11:02.458 fio: pid=2639543, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:02.458 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=9240576, buflen=4096 00:11:02.458 fio: pid=2639542, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:02.458 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11579392, buflen=4096 00:11:02.458 fio: pid=2639539, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:02.458 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=6791168, buflen=4096 00:11:02.458 fio: pid=2639540, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:02.458 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.719 09:28:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:02.719 00:11:02.719 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2639539: Mon Dec 9 09:28:37 2024 00:11:02.719 read: IOPS=950, BW=3801KiB/s (3892kB/s)(11.0MiB/2975msec) 00:11:02.719 slat (usec): min=6, max=22587, avg=37.49, stdev=501.28 00:11:02.719 clat (usec): min=336, max=41185, avg=999.04, stdev=3110.84 00:11:02.719 lat (usec): min=344, max=41212, avg=1036.53, stdev=3150.30 00:11:02.719 clat percentiles (usec): 00:11:02.719 | 1.00th=[ 529], 5.00th=[ 594], 10.00th=[ 644], 20.00th=[ 685], 00:11:02.719 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 766], 60.00th=[ 791], 00:11:02.719 | 70.00th=[ 807], 80.00th=[ 824], 90.00th=[ 857], 95.00th=[ 881], 00:11:02.719 | 99.00th=[ 963], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:02.719 | 99.99th=[41157] 00:11:02.719 bw ( KiB/s): min= 1000, max= 5096, per=29.62%, avg=3689.60, stdev=1951.22, samples=5 00:11:02.719 iops : min= 250, max= 1274, avg=922.40, stdev=487.81, samples=5 00:11:02.719 lat (usec) : 500=0.53%, 750=40.56%, 1000=58.20% 00:11:02.719 lat (msec) : 2=0.04%, 4=0.04%, 50=0.60% 00:11:02.719 cpu : usr=0.74%, sys=2.89%, ctx=2832, majf=0, minf=1 00:11:02.719 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.719 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.719 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.719 issued rwts: total=2828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.719 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.719 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2639540: Mon Dec 9 09:28:37 2024 00:11:02.719 read: IOPS=526, BW=2105KiB/s (2155kB/s)(6632KiB/3151msec) 00:11:02.719 slat (usec): min=6, max=30124, avg=92.26, stdev=1149.84 00:11:02.719 clat (usec): min=651, max=42046, avg=1788.51, stdev=5286.68 00:11:02.719 lat (usec): min=678, max=42073, avg=1880.82, stdev=5402.21 00:11:02.720 clat percentiles (usec): 00:11:02.720 | 1.00th=[ 783], 5.00th=[ 898], 10.00th=[ 963], 20.00th=[ 1029], 00:11:02.720 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:11:02.720 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1237], 00:11:02.720 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:11:02.720 | 99.99th=[42206] 00:11:02.720 bw ( KiB/s): min= 1056, max= 3520, per=16.75%, avg=2086.33, stdev=1086.83, samples=6 00:11:02.720 iops : min= 264, max= 880, avg=521.50, stdev=271.62, samples=6 00:11:02.720 lat (usec) : 750=0.42%, 1000=14.83% 00:11:02.720 lat (msec) : 2=82.88%, 4=0.06%, 50=1.75% 00:11:02.720 cpu : usr=0.70%, sys=2.35%, ctx=1666, majf=0, minf=2 00:11:02.720 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 issued rwts: total=1659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.720 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2639542: Mon Dec 9 09:28:37 2024 00:11:02.720 read: IOPS=810, BW=3243KiB/s (3320kB/s)(9024KiB/2783msec) 00:11:02.720 slat (nsec): min=6844, max=55861, avg=24003.68, stdev=7071.48 00:11:02.720 clat (usec): min=418, max=41371, avg=1194.28, stdev=3793.12 00:11:02.720 lat (usec): min=444, max=41396, avg=1218.28, stdev=3793.39 00:11:02.720 clat percentiles (usec): 00:11:02.720 | 1.00th=[ 619], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 734], 00:11:02.720 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:11:02.720 | 70.00th=[ 824], 80.00th=[ 881], 90.00th=[ 1106], 95.00th=[ 1188], 00:11:02.720 | 99.00th=[ 1385], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:02.720 | 99.99th=[41157] 00:11:02.720 bw ( KiB/s): min= 104, max= 5048, per=24.93%, avg=3105.60, stdev=2132.88, samples=5 00:11:02.720 iops : min= 26, max= 1262, avg=776.40, stdev=533.22, samples=5 00:11:02.720 lat (usec) : 500=0.04%, 750=26.27%, 1000=58.04% 00:11:02.720 lat (msec) : 2=14.62%, 10=0.04%, 50=0.93% 00:11:02.720 cpu : usr=0.72%, sys=2.34%, ctx=2257, majf=0, minf=2 00:11:02.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 issued rwts: total=2257,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.720 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:02.720 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2639543: Mon Dec 9 09:28:37 2024 00:11:02.720 read: IOPS=1182, BW=4728KiB/s (4842kB/s)(12.0MiB/2598msec) 00:11:02.720 slat (nsec): min=6754, max=66606, avg=23801.36, stdev=7982.03 00:11:02.720 clat (usec): min=264, max=41220, avg=809.49, stdev=1035.70 00:11:02.720 lat (usec): min=272, max=41228, avg=833.29, stdev=1035.57 00:11:02.720 clat percentiles (usec): 00:11:02.720 | 1.00th=[ 474], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 725], 00:11:02.720 | 30.00th=[ 766], 40.00th=[ 791], 50.00th=[ 807], 60.00th=[ 816], 00:11:02.720 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 898], 00:11:02.720 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1139], 99.95th=[41157], 00:11:02.720 | 99.99th=[41157] 00:11:02.720 bw ( KiB/s): min= 4624, max= 4912, per=38.29%, avg=4769.60, stdev=122.43, samples=5 00:11:02.720 iops : min= 1156, max= 1228, avg=1192.40, stdev=30.61, samples=5 00:11:02.720 lat (usec) : 500=1.37%, 750=22.98%, 1000=75.46% 00:11:02.720 lat (msec) : 2=0.10%, 50=0.07% 00:11:02.720 cpu : usr=1.00%, sys=3.43%, ctx=3072, majf=0, minf=2 00:11:02.720 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:02.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.720 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.720 latency : target=0, window=0, percentile=100.00%, depth=1 
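The per-job reports above, and the run-status summary that follows, are the expected outcome of the hot-remove test: fio is started in the background, the backing bdevs are deleted out from under it over RPC, and every read job fails with "Operation not supported" once its namespace disappears. In outline, with the rpc.py path shortened and the bdev list abridged, the sequence visible in the trace is (a sketch, not the exact fio.sh source):

# start 4k QD1 libaio reads against the exported namespaces, in the background
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
sleep 3
# hot-remove the backing bdevs while I/O is still in flight
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2; do
    scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done
# fio is expected to exit non-zero after its files vanish
wait "$fio_pid" || fio_status=$?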
00:11:02.720 00:11:02.720 Run status group 0 (all jobs): 00:11:02.720 READ: bw=12.2MiB/s (12.8MB/s), 2105KiB/s-4728KiB/s (2155kB/s-4842kB/s), io=38.3MiB (40.2MB), run=2598-3151msec 00:11:02.720 00:11:02.720 Disk stats (read/write): 00:11:02.720 nvme0n1: ios=2708/0, merge=0/0, ticks=3541/0, in_queue=3541, util=98.10% 00:11:02.720 nvme0n2: ios=1628/0, merge=0/0, ticks=2783/0, in_queue=2783, util=92.69% 00:11:02.720 nvme0n3: ios=2041/0, merge=0/0, ticks=2471/0, in_queue=2471, util=95.99% 00:11:02.720 nvme0n4: ios=3072/0, merge=0/0, ticks=2427/0, in_queue=2427, util=96.35% 00:11:02.720 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.720 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:02.981 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.981 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:03.242 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.242 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:03.242 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:03.242 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2639353 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf 
hotplug test: fio failed as expected' 00:11:03.504 nvmf hotplug test: fio failed as expected 00:11:03.504 09:28:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.766 rmmod nvme_tcp 00:11:03.766 rmmod nvme_fabrics 00:11:03.766 rmmod nvme_keyring 00:11:03.766 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2635615 ']' 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2635615 ']' 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2635615' 00:11:04.026 killing process with pid 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2635615 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:04.026 09:28:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.026 09:28:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:06.590 00:11:06.590 real 0m29.046s 00:11:06.590 user 2m39.791s 00:11:06.590 sys 0m9.569s 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 ************************************ 00:11:06.590 END TEST nvmf_fio_target 00:11:06.590 ************************************ 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.590 ************************************ 00:11:06.590 START TEST nvmf_bdevio 00:11:06.590 ************************************ 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:06.590 * Looking for test storage... 
00:11:06.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:06.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.590 --rc genhtml_branch_coverage=1 00:11:06.590 --rc genhtml_function_coverage=1 00:11:06.590 --rc genhtml_legend=1 00:11:06.590 --rc geninfo_all_blocks=1 00:11:06.590 --rc geninfo_unexecuted_blocks=1 00:11:06.590 00:11:06.590 ' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:06.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.590 --rc genhtml_branch_coverage=1 00:11:06.590 --rc genhtml_function_coverage=1 00:11:06.590 --rc genhtml_legend=1 00:11:06.590 --rc geninfo_all_blocks=1 00:11:06.590 --rc geninfo_unexecuted_blocks=1 00:11:06.590 00:11:06.590 ' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:06.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.590 --rc genhtml_branch_coverage=1 00:11:06.590 --rc genhtml_function_coverage=1 00:11:06.590 --rc genhtml_legend=1 00:11:06.590 --rc geninfo_all_blocks=1 00:11:06.590 --rc geninfo_unexecuted_blocks=1 00:11:06.590 00:11:06.590 ' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:06.590 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.590 --rc genhtml_branch_coverage=1 00:11:06.590 --rc genhtml_function_coverage=1 00:11:06.590 --rc genhtml_legend=1 00:11:06.590 --rc geninfo_all_blocks=1 00:11:06.590 --rc geninfo_unexecuted_blocks=1 00:11:06.590 00:11:06.590 ' 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.590 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:06.591 09:28:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:14.732 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:14.732 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:14.733 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.733 09:28:48 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:14.733 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:14.733 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.733 
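The discovery loop traced above reduces to a small sysfs pattern: for every PCI function matching a supported NIC ID (here two Intel E810 ports, 0x8086:0x159b), look up the net device the kernel created for it, then split the resulting list between target and initiator roles. Simplified (a sketch of the nvmf/common.sh logic, with the role assignment condensed to the two-port case seen here):

for pci in "${pci_devs[@]}"; do
    # each PCI function exposes its netdev name under its sysfs node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the ifname, e.g. cvl_0_0
    net_devs+=("${pci_net_devs[@]}")
done
# with two ports available, one side serves as target, the other as initiator
NVMF_TARGET_INTERFACE=${net_devs[0]}
NVMF_INITIATOR_INTERFACE=${net_devs[1]}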
09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.733 09:28:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:11:14.733 00:11:14.733 --- 10.0.0.2 ping statistics --- 00:11:14.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.733 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:11:14.733 00:11:14.733 --- 10.0.0.1 ping statistics --- 00:11:14.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.733 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2644598 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2644598 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2644598 ']' 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:14.733 09:28:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.733 [2024-12-09 09:28:49.231994] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:11:14.733 [2024-12-09 09:28:49.232044] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.733 [2024-12-09 09:28:49.324093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.733 [2024-12-09 09:28:49.342251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.733 [2024-12-09 09:28:49.342284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.733 [2024-12-09 09:28:49.342293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.733 [2024-12-09 09:28:49.342299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.733 [2024-12-09 09:28:49.342305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:14.733 [2024-12-09 09:28:49.343967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:14.733 [2024-12-09 09:28:49.344121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:14.733 [2024-12-09 09:28:49.344275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.733 [2024-12-09 09:28:49.344276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:14.733 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.733 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 [2024-12-09 09:28:50.096868] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 Malloc0 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.734 09:28:50 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:14.734 [2024-12-09 09:28:50.177102] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.734 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:14.994 { 00:11:14.994 "params": { 00:11:14.994 "name": "Nvme$subsystem", 00:11:14.994 "trtype": "$TEST_TRANSPORT", 00:11:14.994 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.994 "adrfam": "ipv4", 00:11:14.994 "trsvcid": "$NVMF_PORT", 00:11:14.994 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.994 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.994 "hdgst": ${hdgst:-false}, 00:11:14.994 "ddgst": ${ddgst:-false} 00:11:14.994 }, 00:11:14.994 "method": "bdev_nvme_attach_controller" 00:11:14.994 } 00:11:14.994 EOF 00:11:14.994 )") 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:14.994 09:28:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:14.994 "params": { 00:11:14.994 "name": "Nvme1", 00:11:14.994 "trtype": "tcp", 00:11:14.994 "traddr": "10.0.0.2", 00:11:14.994 "adrfam": "ipv4", 00:11:14.994 "trsvcid": "4420", 00:11:14.994 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.994 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.994 "hdgst": false, 00:11:14.994 "ddgst": false 00:11:14.994 }, 00:11:14.994 "method": "bdev_nvme_attach_controller" 00:11:14.994 }' 00:11:14.994 [2024-12-09 09:28:50.235474] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:11:14.994 [2024-12-09 09:28:50.235543] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2644929 ] 00:11:14.994 [2024-12-09 09:28:50.332313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.994 [2024-12-09 09:28:50.363854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.994 [2024-12-09 09:28:50.363982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.994 [2024-12-09 09:28:50.363985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.255 I/O targets: 00:11:15.255 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:15.255 00:11:15.255 00:11:15.255 CUnit - A unit testing framework for C - Version 2.1-3 00:11:15.255 http://cunit.sourceforge.net/ 00:11:15.255 00:11:15.255 00:11:15.255 Suite: bdevio tests on: Nvme1n1 00:11:15.255 Test: blockdev write read block ...passed 00:11:15.255 Test: blockdev write zeroes read block ...passed 00:11:15.255 Test: blockdev write zeroes read no split ...passed 00:11:15.255 Test: blockdev write zeroes read split ...passed 00:11:15.255 Test: blockdev write zeroes read split partial ...passed 00:11:15.255 Test: blockdev reset ...[2024-12-09 09:28:50.644138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:15.255 [2024-12-09 09:28:50.644209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126b580 (9): Bad file descriptor 00:11:15.255 [2024-12-09 09:28:50.706795] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
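The reset test above walks the standard NVMe-oF recovery path: the host disconnects the controller (the transient "Bad file descriptor" is the TCP qpair being flushed during teardown), reconnects, and the bdev layer reports the reset complete. For reference, the same controller reset can be driven by hand against an SPDK app that has the controller attached, using the controller name from the JSON config above (an illustration, not a step of this test):

# ask the bdev_nvme layer to reset the attached controller by name
scripts/rpc.py bdev_nvme_reset_controller Nvme1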
00:11:15.255 passed 00:11:15.516 Test: blockdev write read 8 blocks ...passed 00:11:15.516 Test: blockdev write read size > 128k ...passed 00:11:15.516 Test: blockdev write read invalid size ...passed 00:11:15.516 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:15.516 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:15.516 Test: blockdev write read max offset ...passed 00:11:15.516 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:15.516 Test: blockdev writev readv 8 blocks ...passed 00:11:15.516 Test: blockdev writev readv 30 x 1block ...passed 00:11:15.516 Test: blockdev writev readv block ...passed 00:11:15.516 Test: blockdev writev readv size > 128k ...passed 00:11:15.516 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:15.516 Test: blockdev comparev and writev ...[2024-12-09 09:28:50.926630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.926664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.926676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.926682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:15.516 [2024-12-09 09:28:50.927839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:15.516 [2024-12-09 09:28:50.927845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:15.516 passed 00:11:15.777 Test: blockdev nvme passthru rw ...passed 00:11:15.777 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:28:51.011179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.777 [2024-12-09 09:28:51.011190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:15.777 [2024-12-09 09:28:51.011418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.777 [2024-12-09 09:28:51.011425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:15.777 [2024-12-09 09:28:51.011661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.777 [2024-12-09 09:28:51.011669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:15.777 [2024-12-09 09:28:51.011907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:15.777 [2024-12-09 09:28:51.011915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:15.777 passed 00:11:15.777 Test: blockdev nvme admin passthru ...passed 00:11:15.777 Test: blockdev copy ...passed 00:11:15.777 00:11:15.777 Run Summary: Type Total Ran Passed Failed Inactive 00:11:15.777 suites 1 1 n/a 0 0 00:11:15.777 tests 23 23 23 0 0 00:11:15.777 asserts 152 152 152 0 n/a 00:11:15.777 00:11:15.777 Elapsed time = 1.108 seconds 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:15.777 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:15.778 rmmod nvme_tcp 00:11:15.778 rmmod nvme_fabrics 00:11:15.778 rmmod nvme_keyring 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
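The teardown now in progress is the stock nvmftestfini sequence; schematically (a sketch rather than the exact common.sh source; the namespace removal itself is xtrace-suppressed in the log, so the ip netns delete step is inferred):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # retried: the module can stay busy briefly
done
modprobe -v -r nvme-fabrics
set -e
# probe the target pid with kill -0 before killing and reaping it
kill -0 "$nvmfpid" && kill "$nvmfpid" && wait "$nvmfpid"
# strip the SPDK-tagged firewall rules, then tear down the per-test namespace
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk   # inferred; suppressed in the trace above
ip -4 addr flush cvl_0_1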
00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2644598 ']' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2644598 ']' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2644598' 00:11:16.038 killing process with pid 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2644598 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.038 09:28:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.586 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:18.586 00:11:18.586 real 0m11.988s 00:11:18.586 user 0m12.421s 00:11:18.586 sys 0m6.168s 00:11:18.586 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:18.587 ************************************ 00:11:18.587 END TEST nvmf_bdevio 00:11:18.587 ************************************ 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:18.587 00:11:18.587 real 5m0.773s 00:11:18.587 user 11m50.665s 00:11:18.587 sys 1m49.449s 
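The real/user/sys timings just above, and the START/END TEST banners throughout this log (the END TEST nvmf_target_core banner follows directly below), come from the harness's run_test wrapper, visible in the traces as run_test <name> <script> <args>. A rough standalone sketch of that pattern in plain bash (the real helper in autotest_common.sh does more, e.g. xtrace control and the '[' 3 -le 1 ']' argument guard seen later in this log):

run_test() {                          # usage: run_test <name> <command> [args...]
  local name=$1; shift
  echo '************************************'
  echo "START TEST $name"
  echo '************************************'
  time "$@"                           # produces the real/user/sys summary
  local rc=$?
  echo '************************************'
  echo "END TEST $name"
  echo '************************************'
  return $rc
}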
00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.587 ************************************ 00:11:18.587 END TEST nvmf_target_core 00:11:18.587 ************************************ 00:11:18.587 09:28:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.587 09:28:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.587 09:28:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.587 09:28:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:18.587 ************************************ 00:11:18.587 START TEST nvmf_target_extra 00:11:18.587 ************************************ 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:18.587 * Looking for test storage... 00:11:18.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.587 --rc genhtml_branch_coverage=1 00:11:18.587 --rc genhtml_function_coverage=1 00:11:18.587 --rc genhtml_legend=1 00:11:18.587 --rc geninfo_all_blocks=1 00:11:18.587 --rc geninfo_unexecuted_blocks=1 00:11:18.587 00:11:18.587 ' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.587 --rc genhtml_branch_coverage=1 00:11:18.587 --rc genhtml_function_coverage=1 00:11:18.587 --rc genhtml_legend=1 00:11:18.587 --rc geninfo_all_blocks=1 00:11:18.587 --rc geninfo_unexecuted_blocks=1 00:11:18.587 00:11:18.587 ' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.587 --rc genhtml_branch_coverage=1 00:11:18.587 --rc genhtml_function_coverage=1 00:11:18.587 --rc genhtml_legend=1 00:11:18.587 --rc geninfo_all_blocks=1 00:11:18.587 --rc geninfo_unexecuted_blocks=1 00:11:18.587 00:11:18.587 ' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.587 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.587 --rc genhtml_branch_coverage=1 00:11:18.587 --rc genhtml_function_coverage=1 00:11:18.587 --rc genhtml_legend=1 00:11:18.587 --rc geninfo_all_blocks=1 00:11:18.587 --rc geninfo_unexecuted_blocks=1 00:11:18.587 00:11:18.587 ' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
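The scripts/common.sh trace above (cmp_versions 1.15 '<' 2, used to decide which lcov flags apply) is an elementwise version comparison: split both versions on the characters .-:, pad the shorter list with zeros, and compare field by field. A condensed standalone sketch of the same idea, not the exact helper (which additionally validates each field through its decimal function):

version_lt() {                        # returns 0 (true) when $1 < $2
  local -a a b
  IFS=.-: read -ra a <<< "$1"
  IFS=.-: read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
  done
  return 1                            # equal is not less-than
}
version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the lt 1.15 2 branch above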
00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.587 09:28:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:18.588 ************************************ 00:11:18.588 START TEST nvmf_example 00:11:18.588 ************************************ 00:11:18.588 09:28:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:18.588 * Looking for test storage... 
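The '[: : integer expression expected' message flagged above (nvmf/common.sh line 33) is a genuine, if harmless, bash error: the script runs '[' '' -eq 1 ']' while the tested variable is empty, test(1) cannot compare an empty string numerically, and the condition simply falls through to the next branch. A small sketch of the failure mode and a defensive variant (FLAG is an invented name for illustration; the actual variable is not shown in this trace):

FLAG=''                                # empty, as in the trace
[ "$FLAG" -eq 1 ] && echo enabled      # prints: [: : integer expression expected
[ "${FLAG:-0}" -eq 1 ] && echo enabled # defaulting to 0 avoids the error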
00:11:18.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:18.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.849 --rc genhtml_branch_coverage=1 00:11:18.849 --rc genhtml_function_coverage=1 00:11:18.849 --rc genhtml_legend=1 00:11:18.849 --rc geninfo_all_blocks=1 00:11:18.849 --rc geninfo_unexecuted_blocks=1 00:11:18.849 00:11:18.849 ' 00:11:18.849 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:18.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.849 --rc genhtml_branch_coverage=1 00:11:18.849 --rc genhtml_function_coverage=1 00:11:18.849 --rc genhtml_legend=1 00:11:18.850 --rc geninfo_all_blocks=1 00:11:18.850 --rc geninfo_unexecuted_blocks=1 00:11:18.850 00:11:18.850 ' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:18.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.850 --rc genhtml_branch_coverage=1 00:11:18.850 --rc genhtml_function_coverage=1 00:11:18.850 --rc genhtml_legend=1 00:11:18.850 --rc geninfo_all_blocks=1 00:11:18.850 --rc geninfo_unexecuted_blocks=1 00:11:18.850 00:11:18.850 ' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:18.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:18.850 --rc genhtml_branch_coverage=1 00:11:18.850 --rc genhtml_function_coverage=1 00:11:18.850 --rc genhtml_legend=1 00:11:18.850 --rc geninfo_all_blocks=1 00:11:18.850 --rc geninfo_unexecuted_blocks=1 00:11:18.850 00:11:18.850 ' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:18.850 09:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:18.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:18.850 09:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:18.850 09:28:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:26.986 09:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:26.986 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:26.987 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:26.987 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:26.987 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:26.987 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.987 09:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:26.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:26.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:11:26.987 00:11:26.987 --- 10.0.0.2 ping statistics --- 00:11:26.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.987 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:26.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:26.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.231 ms 00:11:26.987 00:11:26.987 --- 10.0.0.1 ping statistics --- 00:11:26.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:26.987 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:26.987 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2649441 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2649441 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2649441 ']' 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.988 09:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.988 09:29:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.248 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:27.249 09:29:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:39.555 Initializing NVMe Controllers 00:11:39.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:39.555 Initialization complete. Launching workers. 00:11:39.555 ======================================================== 00:11:39.555 Latency(us) 00:11:39.555 Device Information : IOPS MiB/s Average min max 00:11:39.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19038.61 74.37 3361.02 609.23 15627.77 00:11:39.555 ======================================================== 00:11:39.555 Total : 19038.61 74.37 3361.02 609.23 15627.77 00:11:39.555 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:39.555 rmmod nvme_tcp 00:11:39.555 rmmod nvme_fabrics 00:11:39.555 rmmod nvme_keyring 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2649441 ']' 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2649441 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2649441 ']' 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2649441 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649441 00:11:39.555 09:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649441' 00:11:39.555 killing process with pid 2649441 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2649441 00:11:39.555 09:29:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2649441 00:11:39.555 nvmf threads initialize successfully 00:11:39.555 bdev subsystem init successfully 00:11:39.555 created a nvmf target service 00:11:39.555 create targets's poll groups done 00:11:39.555 all subsystems of target started 00:11:39.555 nvmf target is running 00:11:39.555 all subsystems of target stopped 00:11:39.555 destroy targets's poll groups done 00:11:39.555 destroyed the nvmf target service 00:11:39.555 bdev subsystem finish successfully 00:11:39.555 nvmf threads destroy successfully 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.555 09:29:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.815 00:11:39.815 real 0m21.291s 00:11:39.815 user 0m46.425s 00:11:39.815 sys 0m6.869s 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.815 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:39.815 ************************************ 00:11:39.815 END TEST nvmf_example 00:11:39.815 ************************************ 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:40.076 ************************************ 00:11:40.076 START TEST nvmf_filesystem 00:11:40.076 ************************************ 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:40.076 * Looking for test storage... 00:11:40.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:40.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.076 --rc genhtml_branch_coverage=1 00:11:40.076 --rc genhtml_function_coverage=1 00:11:40.076 --rc genhtml_legend=1 00:11:40.076 --rc geninfo_all_blocks=1 00:11:40.076 --rc geninfo_unexecuted_blocks=1 00:11:40.076 00:11:40.076 ' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:40.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.076 --rc genhtml_branch_coverage=1 00:11:40.076 --rc genhtml_function_coverage=1 00:11:40.076 --rc genhtml_legend=1 00:11:40.076 --rc geninfo_all_blocks=1 00:11:40.076 --rc geninfo_unexecuted_blocks=1 00:11:40.076 00:11:40.076 ' 00:11:40.076 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:40.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.076 --rc genhtml_branch_coverage=1 00:11:40.076 --rc genhtml_function_coverage=1 00:11:40.076 --rc genhtml_legend=1 00:11:40.077 --rc geninfo_all_blocks=1 00:11:40.077 --rc geninfo_unexecuted_blocks=1 00:11:40.077 00:11:40.077 ' 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:40.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.077 --rc genhtml_branch_coverage=1 00:11:40.077 --rc genhtml_function_coverage=1 00:11:40.077 --rc genhtml_legend=1 00:11:40.077 --rc geninfo_all_blocks=1 00:11:40.077 --rc geninfo_unexecuted_blocks=1 00:11:40.077 00:11:40.077 ' 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:40.077 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:40.077 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.342 
09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:40.342 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:40.342 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
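The applications.sh trace just above resolves the SPDK root and fills in per-application launch arrays (NVMF_APP, SPDK_APP, and friends). As a rough sketch of how the nvmf test scripts consume these arrays, the snippet below starts the target and polls its RPC socket; the core mask and the simplified readiness loop are illustrative assumptions, not the exact helper implementations:

```bash
#!/usr/bin/env bash
# Sketch only: launch nvmf_tgt from the arrays applications.sh just defined.
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
NVMF_APP=("$_app_dir/nvmf_tgt")

"${NVMF_APP[@]}" -m 0xF &            # core mask 0xF is illustrative
nvmfpid=$!
trap 'kill -9 $nvmfpid 2>/dev/null' SIGINT SIGTERM EXIT

# Simplified stand-in for waitforlisten: poll until the RPC server answers.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until "$rpc" rpc_get_methods &> /dev/null; do
    sleep 0.5
done
```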
00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:40.343 #define SPDK_CONFIG_H 00:11:40.343 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:40.343 #define SPDK_CONFIG_APPS 1 00:11:40.343 #define SPDK_CONFIG_ARCH native 00:11:40.343 #undef SPDK_CONFIG_ASAN 00:11:40.343 #undef SPDK_CONFIG_AVAHI 00:11:40.343 #undef SPDK_CONFIG_CET 00:11:40.343 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:40.343 #define SPDK_CONFIG_COVERAGE 1 00:11:40.343 #define SPDK_CONFIG_CROSS_PREFIX 00:11:40.343 #undef SPDK_CONFIG_CRYPTO 00:11:40.343 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:40.343 #undef SPDK_CONFIG_CUSTOMOCF 00:11:40.343 #undef SPDK_CONFIG_DAOS 00:11:40.343 #define SPDK_CONFIG_DAOS_DIR 00:11:40.343 #define SPDK_CONFIG_DEBUG 1 00:11:40.343 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:40.343 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.343 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:40.343 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.343 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:40.343 #undef SPDK_CONFIG_DPDK_UADK 00:11:40.343 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:40.343 #define SPDK_CONFIG_EXAMPLES 1 00:11:40.343 #undef SPDK_CONFIG_FC 00:11:40.343 #define SPDK_CONFIG_FC_PATH 00:11:40.343 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:40.343 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:40.343 #define SPDK_CONFIG_FSDEV 1 00:11:40.343 #undef SPDK_CONFIG_FUSE 00:11:40.343 #undef SPDK_CONFIG_FUZZER 00:11:40.343 #define SPDK_CONFIG_FUZZER_LIB 00:11:40.343 #undef SPDK_CONFIG_GOLANG 00:11:40.343 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:40.343 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:40.343 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:40.343 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:40.343 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:40.343 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:40.343 #undef SPDK_CONFIG_HAVE_LZ4 00:11:40.343 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:40.343 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:40.343 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:40.343 #define SPDK_CONFIG_IDXD 1 00:11:40.343 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:40.343 #undef SPDK_CONFIG_IPSEC_MB 00:11:40.343 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:40.343 #define SPDK_CONFIG_ISAL 1 00:11:40.343 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:40.343 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:40.343 #define SPDK_CONFIG_LIBDIR 00:11:40.343 #undef SPDK_CONFIG_LTO 00:11:40.343 #define SPDK_CONFIG_MAX_LCORES 128 00:11:40.343 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:40.343 #define SPDK_CONFIG_NVME_CUSE 1 00:11:40.343 #undef SPDK_CONFIG_OCF 00:11:40.343 #define SPDK_CONFIG_OCF_PATH 00:11:40.343 #define SPDK_CONFIG_OPENSSL_PATH 00:11:40.343 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:40.343 #define SPDK_CONFIG_PGO_DIR 00:11:40.343 #undef SPDK_CONFIG_PGO_USE 00:11:40.343 #define SPDK_CONFIG_PREFIX /usr/local 00:11:40.343 #undef SPDK_CONFIG_RAID5F 00:11:40.343 #undef SPDK_CONFIG_RBD 00:11:40.343 #define SPDK_CONFIG_RDMA 1 00:11:40.343 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:40.343 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:40.343 #define 
SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:40.343 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:40.343 #define SPDK_CONFIG_SHARED 1 00:11:40.343 #undef SPDK_CONFIG_SMA 00:11:40.343 #define SPDK_CONFIG_TESTS 1 00:11:40.343 #undef SPDK_CONFIG_TSAN 00:11:40.343 #define SPDK_CONFIG_UBLK 1 00:11:40.343 #define SPDK_CONFIG_UBSAN 1 00:11:40.343 #undef SPDK_CONFIG_UNIT_TESTS 00:11:40.343 #undef SPDK_CONFIG_URING 00:11:40.343 #define SPDK_CONFIG_URING_PATH 00:11:40.343 #undef SPDK_CONFIG_URING_ZNS 00:11:40.343 #undef SPDK_CONFIG_USDT 00:11:40.343 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:40.343 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:40.343 #define SPDK_CONFIG_VFIO_USER 1 00:11:40.343 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:40.343 #define SPDK_CONFIG_VHOST 1 00:11:40.343 #define SPDK_CONFIG_VIRTIO 1 00:11:40.343 #undef SPDK_CONFIG_VTUNE 00:11:40.343 #define SPDK_CONFIG_VTUNE_DIR 00:11:40.343 #define SPDK_CONFIG_WERROR 1 00:11:40.343 #define SPDK_CONFIG_WPDK_DIR 00:11:40.343 #undef SPDK_CONFIG_XNVME 00:11:40.343 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.343 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:40.344 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
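The long run of paired `: 0` / `export SPDK_TEST_*` entries above is what bash's default-parameter idiom looks like under `set -x`: the `:` no-op expands `${VAR:=default}`, so the trace prints only the resulting value. A minimal reproduction (flag names are real, values illustrative; the exact operator and quoting in autotest_common.sh may differ):

```bash
#!/usr/bin/env bash
set -x
# `: "${VAR:=default}"` assigns only if VAR is unset, and traces as `: <value>`,
# producing exactly the `: 0` / `export SPDK_TEST_*` pairs seen in this log.
: "${SPDK_TEST_NVMF:=1}"    # traces as `: 1` (values already in the env win)
export SPDK_TEST_NVMF
: "${SPDK_TEST_ISCSI:=0}"   # traces as `: 0`
export SPDK_TEST_ISCSI
```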
00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:40.344 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:40.344 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 
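These exported flags are what gate the individual suites: wrappers like the `run_test nvmf_filesystem ...` call traced earlier only fire when the matching flag is set. A simplified sketch of that wrapper, matching the START/END banners and the real/user/sys timing lines visible in this log (the actual implementation in autotest_common.sh also manages xtrace state and failure bookkeeping):

```bash
# Simplified run_test: banner, timed execution, banner.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                        # emits the real/user/sys summary lines
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

# Illustrative gate on the flags exported above:
if [[ ${SPDK_TEST_NVMF:-0} -eq 1 ]]; then
    run_test "nvmf_filesystem" ./test/nvmf/target/filesystem.sh --transport=tcp
fi
```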
00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v23.11 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:40.345 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:40.345 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:40.346 09:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2652214 ]] 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2652214 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Qh7hfJ 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:40.346 09:29:15 
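set_test_storage, entered above with a 2 GiB request, first builds a list of candidate directories: the test's own directory, then a unique /tmp fallback named by mktemp -u (name only, nothing is created at that point). A minimal sketch using the variable names from the trace:

  requested_size=2147483648                         # 2 GiB
  storage_fallback=$(mktemp -udt spdk.XXXXXX)       # e.g. /tmp/spdk.Qh7hfJ
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
  mkdir -p "${storage_candidates[@]}"               # all three are created up front
  requested_size=$((requested_size + 64 * 1024 * 1024))   # plus 64 MiB slack -> 2214592512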
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Qh7hfJ/tests/target /tmp/spdk.Qh7hfJ 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=121059983360 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356521472 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8296538112 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
avails["$mount"]=64668229632 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:40.346 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677756928 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678260736 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=503808 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 
00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:40.347 * Looking for test storage... 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=121059983360 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=10511130624 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 
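With the arrays filled, the candidate selection above accepts the first directory whose mount point has room: / offers ~121 GB against the ~2.2 GiB request, and a 95% ceiling guards against filling the filesystem. A simplified sketch (the traced code additionally special-cases tmpfs/ramfs mounts):

  for target_dir in "${storage_candidates[@]}"; do
      # Which mount point does this candidate live on?
      mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
      target_space=${avails["$mount"]}
      (( target_space == 0 || target_space < requested_size )) && continue
      new_size=$((uses["$mount"] + requested_size))            # projected usage after the test
      (( new_size * 100 / sizes["$mount"] > 95 )) && continue  # keep at least 5% headroom
      break
  done
  export SPDK_TEST_STORAGE=$target_dir
  printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"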
00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:40.347 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.609 --rc genhtml_branch_coverage=1 00:11:40.609 --rc genhtml_function_coverage=1 00:11:40.609 --rc genhtml_legend=1 00:11:40.609 --rc geninfo_all_blocks=1 00:11:40.609 --rc geninfo_unexecuted_blocks=1 00:11:40.609 00:11:40.609 ' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.609 --rc genhtml_branch_coverage=1 00:11:40.609 --rc genhtml_function_coverage=1 00:11:40.609 --rc genhtml_legend=1 00:11:40.609 --rc geninfo_all_blocks=1 00:11:40.609 --rc geninfo_unexecuted_blocks=1 00:11:40.609 00:11:40.609 ' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.609 --rc genhtml_branch_coverage=1 00:11:40.609 --rc genhtml_function_coverage=1 00:11:40.609 --rc genhtml_legend=1 00:11:40.609 --rc geninfo_all_blocks=1 00:11:40.609 --rc geninfo_unexecuted_blocks=1 00:11:40.609 00:11:40.609 ' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:40.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.609 --rc genhtml_branch_coverage=1 00:11:40.609 --rc genhtml_function_coverage=1 00:11:40.609 --rc genhtml_legend=1 00:11:40.609 --rc geninfo_all_blocks=1 00:11:40.609 --rc geninfo_unexecuted_blocks=1 00:11:40.609 00:11:40.609 ' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
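The scripts/common.sh activity above is a component-wise version comparison: lcov reports 1.15, and lt 1.15 2 decides that the pre-2.x flag set (--rc lcov_branch_coverage=1 ...) should be used. A condensed reconstruction, not the verbatim helpers (the traced code also validates each component with a decimal() check before comparing):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"    # split on '.', '-' and ':'
      IFS=.-: read -ra ver2 <<< "$3"
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
          if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
      done
      return 1    # versions equal: neither strictly < nor >
  }

  lt 1.15 2 && echo "old lcov"    # first components compare 1 < 2, so this prints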
-- nvmf/common.sh@7 -- # uname -s 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.609 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.610 09:29:15 
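One genuine wart surfaces above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and test(1) rejects the empty string as a non-integer. The run is unaffected (the failed test simply skips the branch), but the standard guard is to default the variable before any arithmetic test. A sketch only; the flag name below is hypothetical:

  # Default an unset or empty flag to 0 so test(1) always sees an integer.
  if [ "${SOME_NUMERIC_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi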
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.610 09:29:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.748 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:48.749 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:48.749 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.749 09:29:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:48.749 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:48.749 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:48.749 09:29:22 
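Each "Found net devices under ..." record above comes from a sysfs walk across the two detected E810 (ice) functions: list the interfaces under the PCI device, keep those that are up, and strip the path down to the interface name. A sketch, assuming operstate is what the up-check reads:

  net_devs=()
  for pci in 0000:4b:00.0 0000:4b:00.1; do        # the two ice ports found above
      for net_dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ $(< "$net_dev/operstate") == up ]] || continue
          echo "Found net devices under $pci: ${net_dev##*/}"
          net_devs+=("${net_dev##*/}")            # cvl_0_0, then cvl_0_1
      done
  done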
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.749 09:29:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:48.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:11:48.749 00:11:48.749 --- 10.0.0.2 ping statistics --- 00:11:48.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.749 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:11:48.749 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:11:48.749 00:11:48.749 --- 10.0.0.1 ping statistics --- 00:11:48.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.750 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 ************************************ 00:11:48.750 START TEST nvmf_filesystem_no_in_capsule 00:11:48.750 ************************************ 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2656086 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2656086 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2656086 ']' 00:11:48.750 
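The namespace plumbing that produced the two successful pings above gives target and initiator separate network stacks on one host: one physical port moves into a private namespace as 10.0.0.2, its sibling stays in the root namespace as 10.0.0.1, and an iptables rule opens the NVMe/TCP port. The commands, collected from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns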
09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.750 09:29:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.750 [2024-12-09 09:29:23.361884] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:11:48.750 [2024-12-09 09:29:23.361945] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.750 [2024-12-09 09:29:23.460362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.750 [2024-12-09 09:29:23.488895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.750 [2024-12-09 09:29:23.488942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.750 [2024-12-09 09:29:23.488955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.750 [2024-12-09 09:29:23.488965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.750 [2024-12-09 09:29:23.488973] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
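Behind nvmfappstart above: the target binary is launched inside the namespace and the harness blocks until its RPC socket answers before sending any configuration. A sketch of that handshake; the polling loop is assumed, not copied from the traced waitforlisten:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" || exit 1    # bail if the target died during startup
      sleep 0.1
  done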
00:11:48.750 [2024-12-09 09:29:23.491208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.750 [2024-12-09 09:29:23.491322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.750 [2024-12-09 09:29:23.491488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.750 [2024-12-09 09:29:23.491488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.750 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.750 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:48.750 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:48.750 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.750 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 [2024-12-09 09:29:24.218428] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 
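The rpc_cmd calls above, plus the add_ns and add_listener calls that follow in the trace, provision the entire target. Issued directly with rpc.py, the equivalent sequence is (a sketch of the same five calls):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport, 8192-byte I/O unit, zero in-capsule data for this test variant.
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  # 512 MiB RAM-backed bdev with 512-byte blocks to serve as the namespace.
  $rpc bdev_malloc_create 512 512 -b Malloc1
  # Subsystem with a fixed serial, then the namespace and the TCP listener.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420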
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 [2024-12-09 09:29:24.355662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:49.011 { 00:11:49.011 "name": "Malloc1", 00:11:49.011 "aliases": [ 00:11:49.011 "3a7906c5-68ff-4bca-a8dc-9a2ebb5863e2" 00:11:49.011 ], 00:11:49.011 "product_name": "Malloc disk", 00:11:49.011 "block_size": 512, 00:11:49.011 "num_blocks": 1048576, 00:11:49.011 "uuid": "3a7906c5-68ff-4bca-a8dc-9a2ebb5863e2", 00:11:49.011 "assigned_rate_limits": { 00:11:49.011 "rw_ios_per_sec": 0, 00:11:49.011 "rw_mbytes_per_sec": 0, 00:11:49.011 "r_mbytes_per_sec": 0, 00:11:49.011 "w_mbytes_per_sec": 0 00:11:49.011 }, 00:11:49.011 "claimed": true, 00:11:49.011 "claim_type": "exclusive_write", 00:11:49.011 "zoned": false, 00:11:49.011 "supported_io_types": { 00:11:49.011 "read": 
true, 00:11:49.011 "write": true, 00:11:49.011 "unmap": true, 00:11:49.011 "flush": true, 00:11:49.011 "reset": true, 00:11:49.011 "nvme_admin": false, 00:11:49.011 "nvme_io": false, 00:11:49.011 "nvme_io_md": false, 00:11:49.011 "write_zeroes": true, 00:11:49.011 "zcopy": true, 00:11:49.011 "get_zone_info": false, 00:11:49.011 "zone_management": false, 00:11:49.011 "zone_append": false, 00:11:49.011 "compare": false, 00:11:49.011 "compare_and_write": false, 00:11:49.011 "abort": true, 00:11:49.011 "seek_hole": false, 00:11:49.011 "seek_data": false, 00:11:49.011 "copy": true, 00:11:49.011 "nvme_iov_md": false 00:11:49.011 }, 00:11:49.011 "memory_domains": [ 00:11:49.011 { 00:11:49.011 "dma_device_id": "system", 00:11:49.011 "dma_device_type": 1 00:11:49.011 }, 00:11:49.011 { 00:11:49.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.011 "dma_device_type": 2 00:11:49.011 } 00:11:49.011 ], 00:11:49.011 "driver_specific": {} 00:11:49.011 } 00:11:49.011 ]' 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:49.011 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:49.012 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:49.272 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:49.272 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:49.272 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:49.272 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:49.272 09:29:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:50.664 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:50.664 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:50.664 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:50.664 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:50.664 09:29:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:53.207 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:53.208 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:53.208 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:53.208 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:53.208 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:53.208 09:29:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:54.149 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:54.149 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:54.149 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:54.149 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.150 ************************************ 00:11:54.150 START TEST filesystem_ext4 00:11:54.150 ************************************ 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
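Before the per-filesystem tests begin, the trace above (target/filesystem.sh@55-63) exports a RAM-backed Malloc bdev over NVMe/TCP and attaches the host with nvme-cli. A condensed sketch of that sequence, assuming a running nvmf_tgt reachable through scripts/rpc.py on the default RPC socket; the NQN, serial, address, and sizes are taken from the trace:

  # Target side: create the bdev and expose it as a namespace on a TCP listener.
  rpc.py bdev_malloc_create 512 512 -b Malloc1                    # 512 MiB, 512 B blocks -> 1048576 blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Host side: connect, then poll lsblk until the serial shows up (waitforserial).
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME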
00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:54.150 09:29:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:54.150 mke2fs 1.47.0 (5-Feb-2023) 00:11:54.150 Discarding device blocks: 0/522240 done 00:11:54.150 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:54.150 Filesystem UUID: ee5aaaf4-6012-43cd-b6f7-852425ec1e8d 00:11:54.150 Superblock backups stored on blocks: 00:11:54.150 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:54.150 00:11:54.150 Allocating group tables: 0/64 done 00:11:54.150 Writing inode tables: 0/64 done 00:11:56.064 Creating journal (8192 blocks): done 00:11:56.064 Writing superblocks and filesystem accounting information: 0/64 done 00:11:56.064 00:11:56.064 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:56.064 09:29:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.661 
09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2656086 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.661 00:12:02.661 real 0m7.639s 00:12:02.661 user 0m0.021s 00:12:02.661 sys 0m0.088s 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:02.661 ************************************ 00:12:02.661 END TEST filesystem_ext4 00:12:02.661 ************************************ 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.661 ************************************ 00:12:02.661 START TEST filesystem_btrfs 00:12:02.661 ************************************ 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:02.661 09:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:02.661 btrfs-progs v6.8.1 00:12:02.661 See https://btrfs.readthedocs.io for more information. 00:12:02.661 00:12:02.661 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:02.661 NOTE: several default settings have changed in version 5.15, please make sure 00:12:02.661 this does not affect your deployments: 00:12:02.661 - DUP for metadata (-m dup) 00:12:02.661 - enabled no-holes (-O no-holes) 00:12:02.661 - enabled free-space-tree (-R free-space-tree) 00:12:02.661 00:12:02.661 Label: (null) 00:12:02.661 UUID: 59fc21d4-cd99-4d46-8fb6-9693b4481d49 00:12:02.661 Node size: 16384 00:12:02.661 Sector size: 4096 (CPU page size: 4096) 00:12:02.661 Filesystem size: 510.00MiB 00:12:02.661 Block group profiles: 00:12:02.661 Data: single 8.00MiB 00:12:02.661 Metadata: DUP 32.00MiB 00:12:02.661 System: DUP 8.00MiB 00:12:02.661 SSD detected: yes 00:12:02.661 Zoned device: no 00:12:02.661 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:02.661 Checksum: crc32c 00:12:02.661 Number of devices: 1 00:12:02.661 Devices: 00:12:02.661 ID SIZE PATH 00:12:02.661 1 510.00MiB /dev/nvme0n1p1 00:12:02.661 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:02.661 09:29:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2656086 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.921 
09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.921 00:12:02.921 real 0m0.981s 00:12:02.921 user 0m0.030s 00:12:02.921 sys 0m0.120s 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:02.921 ************************************ 00:12:02.921 END TEST filesystem_btrfs 00:12:02.921 ************************************ 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.921 ************************************ 00:12:02.921 START TEST filesystem_xfs 00:12:02.921 ************************************ 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:02.921 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:02.922 09:29:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:02.922 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:02.922 = sectsz=512 attr=2, projid32bit=1 00:12:02.922 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:02.922 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:02.922 data 
= bsize=4096 blocks=130560, imaxpct=25 00:12:02.922 = sunit=0 swidth=0 blks 00:12:02.922 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:02.922 log =internal log bsize=4096 blocks=16384, version=2 00:12:02.922 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:02.922 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.859 Discarding blocks...Done. 00:12:03.859 09:29:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:03.859 09:29:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2656086 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.400 00:12:06.400 real 0m3.474s 00:12:06.400 user 0m0.030s 00:12:06.400 sys 0m0.079s 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:06.400 ************************************ 00:12:06.400 END TEST filesystem_xfs 00:12:06.400 ************************************ 00:12:06.400 09:29:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:06.668 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:06.668 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:06.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.929 09:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2656086 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2656086 ']' 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2656086 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2656086 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2656086' 00:12:06.929 killing process with pid 2656086 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2656086 00:12:06.929 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 2656086 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:07.189 00:12:07.189 real 0m19.243s 00:12:07.189 user 1m16.078s 00:12:07.189 sys 0m1.500s 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.189 ************************************ 00:12:07.189 END TEST nvmf_filesystem_no_in_capsule 00:12:07.189 ************************************ 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.189 ************************************ 00:12:07.189 START TEST nvmf_filesystem_in_capsule 00:12:07.189 ************************************ 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2660009 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2660009 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2660009 ']' 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:07.189 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
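From here the suite reruns the same ext4/btrfs/xfs checks with 4096-byte in-capsule data (the first half ran with in_capsule=0); as the trace that follows shows, the functional difference is confined to how the TCP transport is created. A one-line sketch mirroring filesystem.sh@52, with the flag meanings hedged from the trace:

  # -c sets the in-capsule data size, letting small host writes travel inside the
  # command capsule instead of a separate data transfer; -u is the I/O unit size.
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096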
00:12:07.190 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:07.190 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.449 [2024-12-09 09:29:42.686284] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:07.450 [2024-12-09 09:29:42.686330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.450 [2024-12-09 09:29:42.754359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.450 [2024-12-09 09:29:42.770290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.450 [2024-12-09 09:29:42.770319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.450 [2024-12-09 09:29:42.770326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.450 [2024-12-09 09:29:42.770333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.450 [2024-12-09 09:29:42.770337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.450 [2024-12-09 09:29:42.771743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.450 [2024-12-09 09:29:42.771955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.450 [2024-12-09 09:29:42.773653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.450 [2024-12-09 09:29:42.773833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.450 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:07.450 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:07.450 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:07.450 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:07.450 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.710 [2024-12-09 09:29:42.924281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.710 09:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.710 09:29:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.710 Malloc1 00:12:07.710 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 [2024-12-09 09:29:43.059439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:07.711 09:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:07.711 { 00:12:07.711 "name": "Malloc1", 00:12:07.711 "aliases": [ 00:12:07.711 "7dc02057-e7bb-4952-9dc6-c1124273433c" 00:12:07.711 ], 00:12:07.711 "product_name": "Malloc disk", 00:12:07.711 "block_size": 512, 00:12:07.711 "num_blocks": 1048576, 00:12:07.711 "uuid": "7dc02057-e7bb-4952-9dc6-c1124273433c", 00:12:07.711 "assigned_rate_limits": { 00:12:07.711 "rw_ios_per_sec": 0, 00:12:07.711 "rw_mbytes_per_sec": 0, 00:12:07.711 "r_mbytes_per_sec": 0, 00:12:07.711 "w_mbytes_per_sec": 0 00:12:07.711 }, 00:12:07.711 "claimed": true, 00:12:07.711 "claim_type": "exclusive_write", 00:12:07.711 "zoned": false, 00:12:07.711 "supported_io_types": { 00:12:07.711 "read": true, 00:12:07.711 "write": true, 00:12:07.711 "unmap": true, 00:12:07.711 "flush": true, 00:12:07.711 "reset": true, 00:12:07.711 "nvme_admin": false, 00:12:07.711 "nvme_io": false, 00:12:07.711 "nvme_io_md": false, 00:12:07.711 "write_zeroes": true, 00:12:07.711 "zcopy": true, 00:12:07.711 "get_zone_info": false, 00:12:07.711 "zone_management": false, 00:12:07.711 "zone_append": false, 00:12:07.711 "compare": false, 00:12:07.711 "compare_and_write": false, 00:12:07.711 "abort": true, 00:12:07.711 "seek_hole": false, 00:12:07.711 "seek_data": false, 00:12:07.711 "copy": true, 00:12:07.711 "nvme_iov_md": false 00:12:07.711 }, 00:12:07.711 "memory_domains": [ 00:12:07.711 { 00:12:07.711 "dma_device_id": "system", 00:12:07.711 "dma_device_type": 1 00:12:07.711 }, 00:12:07.711 { 00:12:07.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.711 "dma_device_type": 2 00:12:07.711 } 00:12:07.711 ], 00:12:07.711 "driver_specific": {} 00:12:07.711 } 00:12:07.711 ]' 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:07.711 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:07.972 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:07.972 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:07.972 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:07.972 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:07.972 09:29:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.356 09:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.356 09:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:09.356 09:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.356 09:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:09.356 09:29:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:11.899 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:11.899 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:11.900 09:29:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:11.900 09:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:11.900 09:29:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:12.842 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:12.842 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:12.843 ************************************ 00:12:12.843 START TEST filesystem_in_capsule_ext4 00:12:12.843 ************************************ 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:12.843 09:29:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:12.843 mke2fs 1.47.0 (5-Feb-2023) 00:12:12.843 Discarding device blocks: 0/522240 done 00:12:12.843 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:12.843 Filesystem UUID: db931011-782f-46a6-8783-738f9dbc559f 00:12:12.843 Superblock backups stored on blocks: 00:12:12.843 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:12.843 00:12:12.843 Allocating group tables: 0/64 done 00:12:12.843 Writing inode tables: 
0/64 done 00:12:13.785 Creating journal (8192 blocks): done 00:12:13.785 Writing superblocks and filesystem accounting information: 0/64 done 00:12:13.785 00:12:13.785 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:13.785 09:29:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2660009 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.366 00:12:20.366 real 0m6.843s 00:12:20.366 user 0m0.027s 00:12:20.366 sys 0m0.081s 00:12:20.366 09:29:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:20.366 ************************************ 00:12:20.366 END TEST filesystem_in_capsule_ext4 00:12:20.366 ************************************ 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.366 
************************************ 00:12:20.366 START TEST filesystem_in_capsule_btrfs 00:12:20.366 ************************************ 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:20.366 btrfs-progs v6.8.1 00:12:20.366 See https://btrfs.readthedocs.io for more information. 00:12:20.366 00:12:20.366 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:20.366 NOTE: several default settings have changed in version 5.15, please make sure 00:12:20.366 this does not affect your deployments: 00:12:20.366 - DUP for metadata (-m dup) 00:12:20.366 - enabled no-holes (-O no-holes) 00:12:20.366 - enabled free-space-tree (-R free-space-tree) 00:12:20.366 00:12:20.366 Label: (null) 00:12:20.366 UUID: 8a5de9c9-cc8c-448e-b77a-fe334e11e6b0 00:12:20.366 Node size: 16384 00:12:20.366 Sector size: 4096 (CPU page size: 4096) 00:12:20.366 Filesystem size: 510.00MiB 00:12:20.366 Block group profiles: 00:12:20.366 Data: single 8.00MiB 00:12:20.366 Metadata: DUP 32.00MiB 00:12:20.366 System: DUP 8.00MiB 00:12:20.366 SSD detected: yes 00:12:20.366 Zoned device: no 00:12:20.366 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:20.366 Checksum: crc32c 00:12:20.366 Number of devices: 1 00:12:20.366 Devices: 00:12:20.366 ID SIZE PATH 00:12:20.366 1 510.00MiB /dev/nvme0n1p1 00:12:20.366 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2660009 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.366 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.366 00:12:20.366 real 0m0.695s 00:12:20.366 user 0m0.034s 00:12:20.366 sys 0m0.114s 00:12:20.367 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.367 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:20.367 ************************************ 00:12:20.367 END TEST filesystem_in_capsule_btrfs 00:12:20.367 ************************************ 00:12:20.626 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:20.626 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:20.626 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.627 ************************************ 00:12:20.627 START TEST filesystem_in_capsule_xfs 00:12:20.627 ************************************ 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:20.627 09:29:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:20.627 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:20.627 = sectsz=512 attr=2, projid32bit=1 00:12:20.627 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:20.627 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:20.627 data = bsize=4096 blocks=130560, imaxpct=25 00:12:20.627 = sunit=0 swidth=0 blks 00:12:20.627 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:20.627 log =internal log bsize=4096 blocks=16384, version=2 00:12:20.627 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:20.627 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:21.567 Discarding blocks...Done. 
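The mkfs dispatch traced above comes from the make_filesystem helper (autotest_common.sh@930-941): ext4 gets -F, everything else gets -f, and mkfs runs against the test partition. A condensed reconstruction from the xtrace; the retry bound and sleep are assumptions, since the log only ever shows the first attempt succeeding:

  make_filesystem() {
      local fstype=$1 dev_name=$2 i=0 force
      if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
      until "mkfs.$fstype" $force "$dev_name"; do
          (( ++i > 3 )) && return 1   # assumed bound; not visible in the trace
          sleep 1
      done
      return 0
  }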
00:12:21.567 09:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:12:21.567 09:29:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2660009
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:12:24.185 
00:12:24.185 real 0m3.332s
00:12:24.185 user 0m0.031s
00:12:24.185 sys 0m0.076s
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:12:24.185 ************************************
00:12:24.185 END TEST filesystem_in_capsule_xfs
00:12:24.185 ************************************
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:24.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule --
common/autotest_common.sh@1223 -- # local i=0 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2660009 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2660009 ']' 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2660009 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660009 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660009' 00:12:24.185 killing process with pid 2660009 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2660009 00:12:24.185 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2660009 00:12:24.446 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:24.446 00:12:24.446 real 0m17.068s 00:12:24.446 user 1m7.570s 00:12:24.446 sys 0m1.336s 00:12:24.446 09:29:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.446 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:24.446 ************************************ 00:12:24.446 END TEST nvmf_filesystem_in_capsule 00:12:24.446 ************************************ 00:12:24.446 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:24.446 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.446 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.447 rmmod nvme_tcp 00:12:24.447 rmmod nvme_fabrics 00:12:24.447 rmmod nvme_keyring 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.447 09:29:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.993 00:12:26.993 real 0m46.566s 00:12:26.993 user 2m26.032s 00:12:26.993 sys 0m8.683s 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.993 
************************************ 00:12:26.993 END TEST nvmf_filesystem 00:12:26.993 ************************************ 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.993 ************************************ 00:12:26.993 START TEST nvmf_target_discovery 00:12:26.993 ************************************ 00:12:26.993 09:30:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:26.993 * Looking for test storage... 00:12:26.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.993 --rc genhtml_branch_coverage=1 00:12:26.993 --rc genhtml_function_coverage=1 00:12:26.993 --rc genhtml_legend=1 00:12:26.993 --rc geninfo_all_blocks=1 00:12:26.993 --rc geninfo_unexecuted_blocks=1 00:12:26.993 00:12:26.993 ' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.993 --rc genhtml_branch_coverage=1 00:12:26.993 --rc genhtml_function_coverage=1 00:12:26.993 --rc genhtml_legend=1 00:12:26.993 --rc geninfo_all_blocks=1 00:12:26.993 --rc geninfo_unexecuted_blocks=1 00:12:26.993 00:12:26.993 ' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.993 --rc genhtml_branch_coverage=1 00:12:26.993 --rc genhtml_function_coverage=1 00:12:26.993 --rc genhtml_legend=1 00:12:26.993 --rc geninfo_all_blocks=1 00:12:26.993 --rc geninfo_unexecuted_blocks=1 00:12:26.993 00:12:26.993 ' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.993 --rc genhtml_branch_coverage=1 00:12:26.993 --rc genhtml_function_coverage=1 00:12:26.993 --rc genhtml_legend=1 00:12:26.993 --rc geninfo_all_blocks=1 00:12:26.993 --rc geninfo_unexecuted_blocks=1 00:12:26.993 00:12:26.993 ' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.993 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.994 09:30:02 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.138 09:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:35.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.138 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:35.139 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:35.139 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
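The loop above maps each supported PCI function to its kernel network interface by globbing sysfs. The same walk, stripped of the harness plumbing (PCI addresses taken from the log; the /sys layout is standard Linux):

# For each NIC function, list the net interfaces its driver registered.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
    done
done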
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:35.139 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.139 09:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:35.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:35.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms
00:12:35.139 
00:12:35.139 --- 10.0.0.2 ping statistics ---
00:12:35.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.139 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:35.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:35.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:12:35.139 
00:12:35.139 --- 10.0.0.1 ping statistics ---
00:12:35.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:35.139 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2668039
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2668039
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2668039 ']'
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:35.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:35.139 09:30:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:35.139 [2024-12-09 09:30:09.720860] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:12:35.139 [2024-12-09 09:30:09.720929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:35.139 [2024-12-09 09:30:09.820927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:35.139 [2024-12-09 09:30:09.849690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:35.139 [2024-12-09 09:30:09.849742] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:35.139 [2024-12-09 09:30:09.849751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:35.139 [2024-12-09 09:30:09.849759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:35.139 [2024-12-09 09:30:09.849765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
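At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket before issuing commands. A simplified sketch of that launch-and-wait step (the real waitforlisten in autotest_common.sh retries RPC calls up to max_retries=100; the raw socket check below is only an approximation):

ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break   # target is up once the RPC socket exists
    sleep 0.1
done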
00:12:35.139 [2024-12-09 09:30:09.852060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.139 [2024-12-09 09:30:09.852186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.139 [2024-12-09 09:30:09.852351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.139 [2024-12-09 09:30:09.852352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.139 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.140 [2024-12-09 09:30:10.578580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:35.140 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.401 Null1 00:12:35.401 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 [2024-12-09 09:30:10.638916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 Null2 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:35.402 Null3 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 Null4 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.402 09:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:35.402 09:30:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:12:35.663 
00:12:35.663 Discovery Log Number of Records 6, Generation counter 6
00:12:35.663 =====Discovery Log Entry 0======
00:12:35.663 trtype: tcp
00:12:35.663 adrfam: ipv4
00:12:35.663 subtype: current discovery subsystem
00:12:35.663 treq: not required
00:12:35.663 portid: 0
00:12:35.663 trsvcid: 4420
00:12:35.663 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:35.663 traddr: 10.0.0.2
00:12:35.663 eflags: explicit discovery connections, duplicate discovery information
00:12:35.663 sectype: none
00:12:35.663 =====Discovery Log Entry 1======
00:12:35.663 trtype: tcp
00:12:35.663 adrfam: ipv4
00:12:35.663 subtype: nvme subsystem
00:12:35.663 treq: not required
00:12:35.663 portid: 0
00:12:35.663 trsvcid: 4420
00:12:35.663 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:35.663 traddr: 10.0.0.2
00:12:35.663 eflags: none
00:12:35.663 sectype: none
00:12:35.663 =====Discovery Log Entry 2======
00:12:35.663 trtype: tcp
00:12:35.663 adrfam: ipv4
00:12:35.663 subtype: nvme subsystem
00:12:35.663 treq: not required
00:12:35.663 portid: 0
00:12:35.663 trsvcid: 4420
00:12:35.663 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:35.663 traddr: 10.0.0.2
00:12:35.663 eflags: none
00:12:35.663 sectype: none
00:12:35.663 =====Discovery Log Entry 3======
00:12:35.663 trtype: tcp
00:12:35.663 adrfam: ipv4
00:12:35.663 subtype: nvme subsystem
00:12:35.663 treq: not required
00:12:35.663 portid: 0
00:12:35.663 trsvcid: 4420
00:12:35.663 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:35.663 traddr: 10.0.0.2
00:12:35.663 eflags: none
00:12:35.663 sectype: none
00:12:35.663 =====Discovery Log Entry 4======
00:12:35.663 trtype: tcp
00:12:35.663 adrfam: ipv4
00:12:35.663 subtype: nvme subsystem
00:12:35.663 treq: not required 00:12:35.663 portid: 0 00:12:35.663 trsvcid: 4420 00:12:35.663 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:35.663 traddr: 10.0.0.2 00:12:35.663 eflags: none 00:12:35.663 sectype: none 00:12:35.663 =====Discovery Log Entry 5====== 00:12:35.663 trtype: tcp 00:12:35.663 adrfam: ipv4 00:12:35.663 subtype: discovery subsystem referral 00:12:35.663 treq: not required 00:12:35.663 portid: 0 00:12:35.663 trsvcid: 4430 00:12:35.663 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:35.663 traddr: 10.0.0.2 00:12:35.663 eflags: none 00:12:35.663 sectype: none 00:12:35.663 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:35.663 Perform nvmf subsystem discovery via RPC 00:12:35.663 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:35.663 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.663 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.663 [ 00:12:35.663 { 00:12:35.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:35.663 "subtype": "Discovery", 00:12:35.663 "listen_addresses": [ 00:12:35.663 { 00:12:35.663 "trtype": "TCP", 00:12:35.663 "adrfam": "IPv4", 00:12:35.663 "traddr": "10.0.0.2", 00:12:35.663 "trsvcid": "4420" 00:12:35.663 } 00:12:35.663 ], 00:12:35.663 "allow_any_host": true, 00:12:35.663 "hosts": [] 00:12:35.663 }, 00:12:35.663 { 00:12:35.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:35.663 "subtype": "NVMe", 00:12:35.663 "listen_addresses": [ 00:12:35.663 { 00:12:35.664 "trtype": "TCP", 00:12:35.664 "adrfam": "IPv4", 00:12:35.664 "traddr": "10.0.0.2", 00:12:35.664 "trsvcid": "4420" 00:12:35.664 } 00:12:35.664 ], 00:12:35.664 "allow_any_host": true, 00:12:35.664 "hosts": [], 00:12:35.664 "serial_number": "SPDK00000000000001", 00:12:35.664 "model_number": "SPDK bdev Controller", 00:12:35.664 "max_namespaces": 32, 00:12:35.664 "min_cntlid": 1, 00:12:35.664 "max_cntlid": 65519, 00:12:35.664 "namespaces": [ 00:12:35.664 { 00:12:35.664 "nsid": 1, 00:12:35.664 "bdev_name": "Null1", 00:12:35.664 "name": "Null1", 00:12:35.664 "nguid": "07B12FC551D54D22A2A4D8CF9B3877B9", 00:12:35.664 "uuid": "07b12fc5-51d5-4d22-a2a4-d8cf9b3877b9" 00:12:35.664 } 00:12:35.664 ] 00:12:35.664 }, 00:12:35.664 { 00:12:35.664 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:35.664 "subtype": "NVMe", 00:12:35.664 "listen_addresses": [ 00:12:35.664 { 00:12:35.664 "trtype": "TCP", 00:12:35.664 "adrfam": "IPv4", 00:12:35.664 "traddr": "10.0.0.2", 00:12:35.664 "trsvcid": "4420" 00:12:35.664 } 00:12:35.664 ], 00:12:35.664 "allow_any_host": true, 00:12:35.664 "hosts": [], 00:12:35.664 "serial_number": "SPDK00000000000002", 00:12:35.664 "model_number": "SPDK bdev Controller", 00:12:35.664 "max_namespaces": 32, 00:12:35.664 "min_cntlid": 1, 00:12:35.664 "max_cntlid": 65519, 00:12:35.664 "namespaces": [ 00:12:35.664 { 00:12:35.664 "nsid": 1, 00:12:35.664 "bdev_name": "Null2", 00:12:35.664 "name": "Null2", 00:12:35.664 "nguid": "6AA06C7D0E194D0F9454FA751BDCB035", 00:12:35.664 "uuid": "6aa06c7d-0e19-4d0f-9454-fa751bdcb035" 00:12:35.664 } 00:12:35.664 ] 00:12:35.664 }, 00:12:35.664 { 00:12:35.664 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:35.664 "subtype": "NVMe", 00:12:35.664 "listen_addresses": [ 00:12:35.664 { 00:12:35.664 "trtype": "TCP", 00:12:35.664 "adrfam": "IPv4", 00:12:35.664 "traddr": "10.0.0.2", 
00:12:35.664 "trsvcid": "4420" 00:12:35.664 } 00:12:35.664 ], 00:12:35.664 "allow_any_host": true, 00:12:35.664 "hosts": [], 00:12:35.664 "serial_number": "SPDK00000000000003", 00:12:35.664 "model_number": "SPDK bdev Controller", 00:12:35.664 "max_namespaces": 32, 00:12:35.664 "min_cntlid": 1, 00:12:35.664 "max_cntlid": 65519, 00:12:35.664 "namespaces": [ 00:12:35.664 { 00:12:35.664 "nsid": 1, 00:12:35.664 "bdev_name": "Null3", 00:12:35.664 "name": "Null3", 00:12:35.664 "nguid": "41752545940F429DA31F35C1E1E9B29A", 00:12:35.664 "uuid": "41752545-940f-429d-a31f-35c1e1e9b29a" 00:12:35.664 } 00:12:35.664 ] 00:12:35.664 }, 00:12:35.664 { 00:12:35.664 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:35.664 "subtype": "NVMe", 00:12:35.664 "listen_addresses": [ 00:12:35.664 { 00:12:35.664 "trtype": "TCP", 00:12:35.664 "adrfam": "IPv4", 00:12:35.664 "traddr": "10.0.0.2", 00:12:35.664 "trsvcid": "4420" 00:12:35.664 } 00:12:35.664 ], 00:12:35.664 "allow_any_host": true, 00:12:35.664 "hosts": [], 00:12:35.664 "serial_number": "SPDK00000000000004", 00:12:35.664 "model_number": "SPDK bdev Controller", 00:12:35.664 "max_namespaces": 32, 00:12:35.664 "min_cntlid": 1, 00:12:35.664 "max_cntlid": 65519, 00:12:35.664 "namespaces": [ 00:12:35.664 { 00:12:35.664 "nsid": 1, 00:12:35.664 "bdev_name": "Null4", 00:12:35.664 "name": "Null4", 00:12:35.664 "nguid": "CB8E734C424548FCA969F3118FF182CC", 00:12:35.664 "uuid": "cb8e734c-4245-48fc-a969-f3118ff182cc" 00:12:35.664 } 00:12:35.664 ] 00:12:35.664 } 00:12:35.664 ] 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.664 09:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.664 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.925 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:35.926 09:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.926 rmmod nvme_tcp 00:12:35.926 rmmod nvme_fabrics 00:12:35.926 rmmod nvme_keyring 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2668039 ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2668039 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2668039 ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2668039 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2668039 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2668039' 00:12:35.926 killing process with pid 2668039 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2668039 00:12:35.926 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2668039 00:12:36.187 09:30:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:36.187 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.188 09:30:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.103 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:38.103 00:12:38.103 real 0m11.582s 00:12:38.103 user 0m8.835s 00:12:38.103 sys 0m6.000s 00:12:38.103 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.103 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:38.103 ************************************ 00:12:38.103 END TEST nvmf_target_discovery 00:12:38.103 ************************************ 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:38.365 ************************************ 00:12:38.365 START TEST nvmf_referrals 00:12:38.365 ************************************ 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:38.365 * Looking for test storage... 
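The nvmf_target_discovery test above finishes in about 11.6 s of wall time: each of four loop iterations creates a null bdev, wraps it in its own subsystem with one namespace and a TCP listener on 10.0.0.2:4420, a discovery listener and a referral to port 4430 are added on top, `nvme discover` reports the expected six log entries (the current discovery subsystem, four NVMe subsystems, one referral) and `nvmf_get_subsystems` returns the matching JSON, after which everything is deleted in reverse order. As a minimal standalone sketch of the setup phase, assuming a running nvmf_tgt on the default /var/tmp/spdk.sock and scripts/rpc.py on PATH (in the harness, rpc_cmd is a wrapper around rpc.py):

# Per-subsystem setup loop, mirroring target/discovery.sh lines 26-30 as traced above.
for i in $(seq 1 4); do
  rpc.py bdev_null_create "Null$i" 102400 512                 # size and block size as traced
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery log entry 0
rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # discovery log entry 5

The nvmf_referrals test that starts next exercises the referral half of that picture in isolation.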
00:12:38.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.365 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.626 --rc genhtml_branch_coverage=1 00:12:38.626 --rc genhtml_function_coverage=1 00:12:38.626 --rc genhtml_legend=1 00:12:38.626 --rc geninfo_all_blocks=1 00:12:38.626 --rc geninfo_unexecuted_blocks=1 00:12:38.626 00:12:38.626 ' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.626 --rc genhtml_branch_coverage=1 00:12:38.626 --rc genhtml_function_coverage=1 00:12:38.626 --rc genhtml_legend=1 00:12:38.626 --rc geninfo_all_blocks=1 00:12:38.626 --rc geninfo_unexecuted_blocks=1 00:12:38.626 00:12:38.626 ' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.626 --rc genhtml_branch_coverage=1 00:12:38.626 --rc genhtml_function_coverage=1 00:12:38.626 --rc genhtml_legend=1 00:12:38.626 --rc geninfo_all_blocks=1 00:12:38.626 --rc geninfo_unexecuted_blocks=1 00:12:38.626 00:12:38.626 ' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:38.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.626 --rc genhtml_branch_coverage=1 00:12:38.626 --rc genhtml_function_coverage=1 00:12:38.626 --rc genhtml_legend=1 00:12:38.626 --rc geninfo_all_blocks=1 00:12:38.626 --rc geninfo_unexecuted_blocks=1 00:12:38.626 00:12:38.626 ' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
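One artifact in the trace above is worth a note: sourcing nvmf/common.sh emits `[: : integer expression expected` at line 33 because a check of the form '[' '' -eq 1 ']' expands an unset flag to the empty string, and test(1)'s -eq requires integers on both sides. The check merely returns non-zero and sourcing continues, as the trace shows. A generic sketch of the failure mode and the usual guard (FLAG is a placeholder; the trace records only the expanded empty value, not the variable's name):

FLAG=""
[ "$FLAG" -eq 1 ] && echo "yes"                    # prints "[: : integer expression expected"
[ "${FLAG:-0}" -eq 1 ] || echo "treated as false"  # defaulting to 0 avoids the error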
00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.626 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.627 09:30:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:46.767 09:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:46.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:46.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:46.767 
09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:46.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:46.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.767 09:30:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.767 09:30:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.767 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.767 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:46.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:12:46.768 00:12:46.768 --- 10.0.0.2 ping statistics --- 00:12:46.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.768 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:46.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:12:46.768 00:12:46.768 --- 10.0.0.1 ping statistics --- 00:12:46.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.768 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2672854 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2672854 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2672854 ']' 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
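Before the referral RPCs can run, nvmf_tcp_init has built the harness's two-port topology as traced above: the first ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target side at 10.0.0.2/24, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits NVMe/TCP on port 4420, and the two pings (0.570 ms and 0.329 ms above) confirm the link in both directions before nvmf_tgt is launched inside the namespace. Condensed from the trace into a runnable replay (run as root; the cvl_* names are the ice driver's, as enumerated above):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC gets its own ns
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP from initiator
ping -c 1 10.0.0.2                                              # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target ns -> initiator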
00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:46.768 09:30:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.768 [2024-12-09 09:30:21.307779] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:46.768 [2024-12-09 09:30:21.307845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.768 [2024-12-09 09:30:21.407192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.768 [2024-12-09 09:30:21.435431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.768 [2024-12-09 09:30:21.435475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.768 [2024-12-09 09:30:21.435483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.768 [2024-12-09 09:30:21.435490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.768 [2024-12-09 09:30:21.435497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.768 [2024-12-09 09:30:21.437712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.768 [2024-12-09 09:30:21.437851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.768 [2024-12-09 09:30:21.438018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.768 [2024-12-09 09:30:21.438019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.768 [2024-12-09 09:30:22.169083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:46.768 [2024-12-09 09:30:22.197786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.768 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.029 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.030 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:47.291 09:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.291 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:47.552 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:47.553 09:30:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:47.814 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.075 09:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.075 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:48.335 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:48.336 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.336 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.596 09:30:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:48.596 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:48.596 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:48.596 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.596 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:48.857 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
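The referral assertions traced above always compare two views of the same table: the target's own list, read over RPC with nvmf_discovery_get_referrals, and what a host actually sees in the discovery log page served on 10.0.0.2:8009. A minimal sketch of the get_referral_ips helper those assertions call, reconstructed from the traced commands (rpc_cmd is the suite's wrapper around scripts/rpc.py; the --hostnqn/--hostid flags shown in the trace are elided here for brevity):

    # Print the sorted referral addresses, as seen either by the target (rpc)
    # or by a discovering host (nvme).
    get_referral_ips() {
        if [[ $1 == rpc ]]; then
            rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
        elif [[ $1 == nvme ]]; then
            # Drop the record describing the discovery subsystem we queried;
            # only genuine referral records count.
            nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
                jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
        fi
    }

After the nvmf_discovery_remove_referral calls above, both views are expected to be empty, which is what the (( 0 == 0 )) and [[ '' == '' ]] checks in the trace assert before teardown begins.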
00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.117 rmmod nvme_tcp 00:12:49.117 rmmod nvme_fabrics 00:12:49.117 rmmod nvme_keyring 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2672854 ']' 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2672854 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2672854 ']' 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2672854 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2672854 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2672854' 00:12:49.117 killing process with pid 2672854 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2672854 00:12:49.117 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2672854 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.377 09:30:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.377 09:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:51.292 00:12:51.292 real 0m13.034s 00:12:51.292 user 0m15.646s 00:12:51.292 sys 0m6.393s 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:51.292 ************************************ 00:12:51.292 END TEST nvmf_referrals 00:12:51.292 ************************************ 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.292 09:30:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:51.554 ************************************ 00:12:51.554 START TEST nvmf_connect_disconnect 00:12:51.554 ************************************ 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:51.554 * Looking for test storage... 00:12:51.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.554 09:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.554 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:51.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.555 --rc genhtml_branch_coverage=1 00:12:51.555 --rc genhtml_function_coverage=1 00:12:51.555 --rc genhtml_legend=1 00:12:51.555 --rc geninfo_all_blocks=1 00:12:51.555 --rc geninfo_unexecuted_blocks=1 00:12:51.555 00:12:51.555 ' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:51.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.555 --rc genhtml_branch_coverage=1 00:12:51.555 --rc genhtml_function_coverage=1 00:12:51.555 --rc genhtml_legend=1 00:12:51.555 --rc geninfo_all_blocks=1 00:12:51.555 --rc geninfo_unexecuted_blocks=1 00:12:51.555 00:12:51.555 ' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:51.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.555 --rc genhtml_branch_coverage=1 00:12:51.555 --rc genhtml_function_coverage=1 00:12:51.555 --rc genhtml_legend=1 00:12:51.555 --rc geninfo_all_blocks=1 00:12:51.555 --rc geninfo_unexecuted_blocks=1 00:12:51.555 00:12:51.555 ' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:51.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.555 --rc genhtml_branch_coverage=1 00:12:51.555 --rc genhtml_function_coverage=1 00:12:51.555 --rc genhtml_legend=1 00:12:51.555 --rc geninfo_all_blocks=1 00:12:51.555 --rc geninfo_unexecuted_blocks=1 00:12:51.555 00:12:51.555 ' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.555 09:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:51.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:51.555 09:30:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:59.702 
09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.702 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:59.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.703 
09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:59.703 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:59.703 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
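The device scan above boils down to one sysfs lookup per PCI function: the kernel net interfaces backing a NIC are listed under /sys/bus/pci/devices/<bdf>/net/, which is what the traced pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion reads. A short sketch of that resolution for the first E810 port found above (the address 0000:4b:00.0 and the name cvl_0_0 come from the trace; the explicit operstate read is an assumption about what the [[ up == up ]] check expands from):

    pci=0000:4b:00.0                       # first supported (0x8086:0x159b) port found above
    for path in /sys/bus/pci/devices/$pci/net/*; do
        dev=${path##*/}                    # basename of the sysfs entry, e.g. cvl_0_0
        [[ $(cat /sys/class/net/$dev/operstate) == up ]] || continue
        echo "Found net devices under $pci: $dev"
    done

The same walk then repeats for the second port, 0000:4b:00.1, yielding cvl_0_1 below.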
00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:59.703 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:59.703 09:30:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:59.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:59.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:12:59.703 00:12:59.703 --- 10.0.0.2 ping statistics --- 00:12:59.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.703 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:59.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:59.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:12:59.703 00:12:59.703 --- 10.0.0.1 ping statistics --- 00:12:59.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:59.703 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2677629 00:12:59.703 09:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2677629 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2677629 ']' 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:59.703 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.703 [2024-12-09 09:30:34.247365] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:12:59.703 [2024-12-09 09:30:34.247432] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.703 [2024-12-09 09:30:34.334360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.703 [2024-12-09 09:30:34.370115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:59.704 [2024-12-09 09:30:34.370184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.704 [2024-12-09 09:30:34.370196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.704 [2024-12-09 09:30:34.370205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.704 [2024-12-09 09:30:34.370213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
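The plumbing traced above splits the two E810 ports between network namespaces so that one machine can play both NVMe/TCP endpoints: the target port (cvl_0_0, 10.0.0.2) moves into cvl_0_0_ns_spdk while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace. Condensed from the commands in the trace (addresses, interface names and flags as shown there; the iptables comment tag is dropped here):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP through
    ping -c 1 10.0.0.2                                               # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # and back
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

This is the link that the 100 nvme connect / nvme disconnect iterations below exercise once the subsystem and its 10.0.0.2:4420 listener are created.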
00:12:59.704 [2024-12-09 09:30:34.372715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.704 [2024-12-09 09:30:34.372751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.704 [2024-12-09 09:30:34.372929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.704 [2024-12-09 09:30:34.372932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 [2024-12-09 09:30:34.521902] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 09:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.704 [2024-12-09 09:30:34.589044] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:59.704 09:30:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:01.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.244 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.779 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.895 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.403 [2024-12-09 09:31:51.558569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd91c0 is same with the state(6) to be set 00:14:16.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.948 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:23.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.865 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.195 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:19.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.258 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.288 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.838 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:01.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.759 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.763 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:35.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.002 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:51.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.479 rmmod nvme_tcp 00:16:53.479 rmmod nvme_fabrics 00:16:53.479 rmmod nvme_keyring 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2677629 ']' 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2677629 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2677629 ']' 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@958 -- # kill -0 2677629 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2677629 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2677629' 00:16:53.479 killing process with pid 2677629 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2677629 00:16:53.479 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2677629 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.740 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.294 00:16:56.294 real 4m4.371s 00:16:56.294 user 15m30.224s 00:16:56.294 sys 0m26.327s 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:56.294 ************************************ 00:16:56.294 END TEST nvmf_connect_disconnect 00:16:56.294 ************************************ 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 
-- # '[' 3 -le 1 ']' 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.294 ************************************ 00:16:56.294 START TEST nvmf_multitarget 00:16:56.294 ************************************ 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:56.294 * Looking for test storage... 00:16:56.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.294 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:56.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.295 --rc genhtml_branch_coverage=1 00:16:56.295 --rc genhtml_function_coverage=1 00:16:56.295 --rc genhtml_legend=1 00:16:56.295 --rc geninfo_all_blocks=1 00:16:56.295 --rc geninfo_unexecuted_blocks=1 00:16:56.295 00:16:56.295 ' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:56.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.295 --rc genhtml_branch_coverage=1 00:16:56.295 --rc genhtml_function_coverage=1 00:16:56.295 --rc genhtml_legend=1 00:16:56.295 --rc geninfo_all_blocks=1 00:16:56.295 --rc geninfo_unexecuted_blocks=1 00:16:56.295 00:16:56.295 ' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:56.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.295 --rc genhtml_branch_coverage=1 00:16:56.295 --rc genhtml_function_coverage=1 00:16:56.295 --rc genhtml_legend=1 00:16:56.295 --rc geninfo_all_blocks=1 00:16:56.295 --rc geninfo_unexecuted_blocks=1 00:16:56.295 00:16:56.295 ' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:56.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.295 --rc genhtml_branch_coverage=1 00:16:56.295 --rc genhtml_function_coverage=1 00:16:56.295 --rc genhtml_legend=1 00:16:56.295 --rc geninfo_all_blocks=1 00:16:56.295 --rc geninfo_unexecuted_blocks=1 00:16:56.295 00:16:56.295 ' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.295 09:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:56.295 09:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.295 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.889 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
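The e810/x722/mlx arrays assembled above are allow-lists of NIC PCI IDs (Intel vendor 0x8086, Mellanox 0x15b3) that the subsequent device scan matches against the host's hardware. A minimal stand-alone sketch of that matching idea, assuming lspci is available (an illustration only, not the actual common.sh implementation):

# lspci -Dn lines look like: 0000:4b:00.0 0200: 8086:159b (rev 02)
supported=("8086:1592" "8086:159b" "8086:37d2" "15b3:1017")
while read -r slot _ id _; do
  for want in "${supported[@]}"; do
    [[ $id == "$want" ]] && echo "candidate NVMe-oF NIC: $slot ($id)"
  done
done < <(lspci -Dn)

The trace then resolves each matching PCI address to its interface name through /sys/bus/pci/devices/<addr>/net/, which is where the cvl_0_0 and cvl_0_1 names reported below come from.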
00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:02.890 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:02.890 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:02.890 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:02.890 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.890 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:03.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:03.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.491 ms 00:17:03.151 00:17:03.151 --- 10.0.0.2 ping statistics --- 00:17:03.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.151 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:17:03.151 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:03.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:03.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:17:03.151 00:17:03.151 --- 10.0.0.1 ping statistics --- 00:17:03.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:03.151 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:03.152 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2728945 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2728945 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2728945 ']' 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.413 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.413 [2024-12-09 09:34:38.711346] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
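The nvmf_tcp_init steps traced above split one two-port NIC across a network namespace so initiator and target can talk over real hardware on a single host. Condensed to the essential commands, with interface names and addresses exactly as they appear in the log:

ip netns add cvl_0_0_ns_spdk                     # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                    # host -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> host check

The two pings are exactly the 10.0.0.2 and 10.0.0.1 round trips whose statistics appear in the surrounding log entries.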
00:17:03.413 [2024-12-09 09:34:38.711415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:03.413 [2024-12-09 09:34:38.811010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:03.413 [2024-12-09 09:34:38.839719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:03.413 [2024-12-09 09:34:38.839766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:03.413 [2024-12-09 09:34:38.839774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:03.413 [2024-12-09 09:34:38.839782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:03.413 [2024-12-09 09:34:38.839788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:03.413 [2024-12-09 09:34:38.842006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.413 [2024-12-09 09:34:38.842132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.413 [2024-12-09 09:34:38.842300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.413 [2024-12-09 09:34:38.842299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:17:04.357 "nvmf_tgt_1" 00:17:04.357 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:04.638 "nvmf_tgt_2" 00:17:04.638 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
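The multitarget test body that runs here is a short RPC round trip: count the targets, create two more, then (further down) delete them and confirm the count drops back. Reading the jq checks in the trace as target-count assertions, the sequence reduces to this sketch:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 as passed in the trace
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new ones
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default

The bare "nvmf_tgt_1"/"nvmf_tgt_2" strings and the two "true" lines interleaved in the log are the stdout of the create and delete RPCs.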
00:17:04.638 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:04.638 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:04.638 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:04.904 true 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:04.904 true 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:04.904 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:04.904 rmmod nvme_tcp 00:17:05.166 rmmod nvme_fabrics 00:17:05.166 rmmod nvme_keyring 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2728945 ']' 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2728945 ']' 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.166 09:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2728945' 00:17:05.166 killing process with pid 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2728945 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.166 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:07.714 00:17:07.714 real 0m11.453s 00:17:07.714 user 0m9.781s 00:17:07.714 sys 0m5.962s 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:07.714 ************************************ 00:17:07.714 END TEST nvmf_multitarget 00:17:07.714 ************************************ 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:07.714 ************************************ 00:17:07.714 START TEST nvmf_rpc 00:17:07.714 ************************************ 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:07.714 * Looking for test storage... 
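The teardown just completed for nvmf_multitarget follows the same pattern every test in this run uses: kill the target process, unload the initiator-side kernel modules, strip only the SPDK-tagged firewall rules, and remove the namespace. A simplified sketch (the real nvmftestfini also covers RDMA and iso setups; ip netns delete here is an assumed stand-in for _remove_spdk_ns):

kill "$nvmfpid" && wait "$nvmfpid"      # killprocess / wait in the trace
sync
modprobe -v -r nvme-tcp                 # its removal also pulls out the
modprobe -v -r nvme-fabrics             # nvme_fabrics/nvme_keyring dependents
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop tagged rules only
ip netns delete cvl_0_0_ns_spdk         # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1

Tagging the iptables rules with a SPDK_NVMF comment at setup time is what lets this filter delete only the test's own rules rather than flushing the whole INPUT chain.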
00:17:07.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:07.714 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:07.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.715 --rc genhtml_branch_coverage=1 00:17:07.715 --rc genhtml_function_coverage=1 00:17:07.715 --rc genhtml_legend=1 00:17:07.715 --rc geninfo_all_blocks=1 00:17:07.715 --rc geninfo_unexecuted_blocks=1 00:17:07.715 00:17:07.715 ' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:07.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.715 --rc genhtml_branch_coverage=1 00:17:07.715 --rc genhtml_function_coverage=1 00:17:07.715 --rc genhtml_legend=1 00:17:07.715 --rc geninfo_all_blocks=1 00:17:07.715 --rc geninfo_unexecuted_blocks=1 00:17:07.715 00:17:07.715 ' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:07.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.715 --rc genhtml_branch_coverage=1 00:17:07.715 --rc genhtml_function_coverage=1 00:17:07.715 --rc genhtml_legend=1 00:17:07.715 --rc geninfo_all_blocks=1 00:17:07.715 --rc geninfo_unexecuted_blocks=1 00:17:07.715 00:17:07.715 ' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:07.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.715 --rc genhtml_branch_coverage=1 00:17:07.715 --rc genhtml_function_coverage=1 00:17:07.715 --rc genhtml_legend=1 00:17:07.715 --rc geninfo_all_blocks=1 00:17:07.715 --rc geninfo_unexecuted_blocks=1 00:17:07.715 00:17:07.715 ' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
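The lt 1.15 2 call traced above is a plain segment-wise numeric comparison: split both version strings on the separators, treat missing fields as zero, and compare field by field; it decides whether the installed lcov predates 2.x so the right LCOV_OPTS can be exported. A self-contained sketch of the same logic (hypothetical helper name; scripts/common.sh structures it as cmp_versions with an operator argument):

version_lt() {
  local IFS=.- a b
  read -ra a <<< "$1"; read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} )) i x y
  for ((i = 0; i < n; i++)); do
    x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # strictly smaller field: $1 < $2
    (( x > y )) && return 1
  done
  return 1                    # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints the message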
00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:07.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.715 09:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:07.715 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.862 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.862 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:15.862 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:15.862 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.862 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:15.863 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:15.863 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:15.863 09:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:15.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:15.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:17:15.863 00:17:15.863 --- 10.0.0.2 ping statistics --- 00:17:15.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.863 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:15.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:15.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.227 ms 00:17:15.863 00:17:15.863 --- 10.0.0.1 ping statistics --- 00:17:15.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:15.863 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2733633 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2733633 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2733633 ']' 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:15.863 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.863 [2024-12-09 09:34:50.452076] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
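By this point nvmftestinit has finished the network plumbing the target needs: of the two ice ports found above, cvl_0_0 is moved into a private network namespace as the target side at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, both directions are ping-tested, and nvmf_tgt is launched inside that namespace. Condensed from the trace, with the long Jenkins workspace path shortened and the iptables bookkeeping comment dropped:

    # Pin the target port in its own netns; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Sanity-check both directions, load the host-side transport, start the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target behind ip netns exec is what lets a single machine exercise the full NVMe/TCP path: traffic between initiator (10.0.0.1) and target (10.0.0.2) crosses the link between the two physical ports rather than loopback.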
00:17:15.863 [2024-12-09 09:34:50.452146] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.863 [2024-12-09 09:34:50.552603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:15.863 [2024-12-09 09:34:50.581110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.863 [2024-12-09 09:34:50.581165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.863 [2024-12-09 09:34:50.581173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:15.863 [2024-12-09 09:34:50.581180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:15.863 [2024-12-09 09:34:50.581187] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.863 [2024-12-09 09:34:50.583437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:15.863 [2024-12-09 09:34:50.583583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:15.863 [2024-12-09 09:34:50.583731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:15.863 [2024-12-09 09:34:50.583734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.863 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.124 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.124 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:16.124 "tick_rate": 2400000000, 00:17:16.124 "poll_groups": [ 00:17:16.124 { 00:17:16.124 "name": "nvmf_tgt_poll_group_000", 00:17:16.124 "admin_qpairs": 0, 00:17:16.124 "io_qpairs": 0, 00:17:16.124 "current_admin_qpairs": 0, 00:17:16.124 "current_io_qpairs": 0, 00:17:16.124 "pending_bdev_io": 0, 00:17:16.124 "completed_nvme_io": 0, 00:17:16.124 "transports": [] 00:17:16.124 }, 00:17:16.124 { 00:17:16.124 "name": "nvmf_tgt_poll_group_001", 00:17:16.124 "admin_qpairs": 0, 00:17:16.124 "io_qpairs": 0, 00:17:16.124 "current_admin_qpairs": 0, 00:17:16.124 "current_io_qpairs": 0, 00:17:16.124 "pending_bdev_io": 0, 00:17:16.124 "completed_nvme_io": 0, 00:17:16.124 "transports": [] 00:17:16.124 }, 00:17:16.124 { 00:17:16.125 "name": "nvmf_tgt_poll_group_002", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 
"current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [] 00:17:16.125 }, 00:17:16.125 { 00:17:16.125 "name": "nvmf_tgt_poll_group_003", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 "current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [] 00:17:16.125 } 00:17:16.125 ] 00:17:16.125 }' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.125 [2024-12-09 09:34:51.427054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:16.125 "tick_rate": 2400000000, 00:17:16.125 "poll_groups": [ 00:17:16.125 { 00:17:16.125 "name": "nvmf_tgt_poll_group_000", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 "current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [ 00:17:16.125 { 00:17:16.125 "trtype": "TCP" 00:17:16.125 } 00:17:16.125 ] 00:17:16.125 }, 00:17:16.125 { 00:17:16.125 "name": "nvmf_tgt_poll_group_001", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 "current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [ 00:17:16.125 { 00:17:16.125 "trtype": "TCP" 00:17:16.125 } 00:17:16.125 ] 00:17:16.125 }, 00:17:16.125 { 00:17:16.125 "name": "nvmf_tgt_poll_group_002", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 "current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [ 00:17:16.125 { 00:17:16.125 "trtype": "TCP" 
00:17:16.125 } 00:17:16.125 ] 00:17:16.125 }, 00:17:16.125 { 00:17:16.125 "name": "nvmf_tgt_poll_group_003", 00:17:16.125 "admin_qpairs": 0, 00:17:16.125 "io_qpairs": 0, 00:17:16.125 "current_admin_qpairs": 0, 00:17:16.125 "current_io_qpairs": 0, 00:17:16.125 "pending_bdev_io": 0, 00:17:16.125 "completed_nvme_io": 0, 00:17:16.125 "transports": [ 00:17:16.125 { 00:17:16.125 "trtype": "TCP" 00:17:16.125 } 00:17:16.125 ] 00:17:16.125 } 00:17:16.125 ] 00:17:16.125 }' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.125 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.386 Malloc1 00:17:16.386 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.386 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:16.386 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.386 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.386 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 [2024-12-09 09:34:51.629046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:16.387 [2024-12-09 09:34:51.665943] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:16.387 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:16.387 could not add new controller: failed to write to nvme-fabrics device 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:16.387 09:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.387 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.772 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:17.772 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:17.772 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.772 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:17.772 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:20.319 [2024-12-09 09:34:55.393431] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:20.319 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:20.319 could not add new controller: failed to write to nvme-fabrics device 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.319 
09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.319 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.717 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.717 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.717 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:21.717 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.717 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:23.629 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:23.629 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.889 
09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.889 [2024-12-09 09:34:59.123485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.889 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:25.291 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:25.291 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:25.291 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:25.291 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:25.291 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:27.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 [2024-12-09 09:35:02.871691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.837 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:29.232 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:29.232 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:29.232 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:29.232 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:29.232 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:31.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.144 [2024-12-09 09:35:06.590410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.144 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.404 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:32.891 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:32.891 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:32.891 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:32.891 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:32.891 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:34.800 
09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:34.800 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:35.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 [2024-12-09 09:35:10.356830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.061 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:36.445 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:36.445 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:36.445 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:36.445 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:36.445 09:35:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:38.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:38.993 09:35:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
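The connect/verify cycle traced above polls lsblk until a block device with the expected serial appears, then mirrors the check on disconnect. A minimal sketch of the two helpers, reconstructed from the xtrace output above (the real definitions live in common/autotest_common.sh and may differ in detail, e.g. the exact ordering of sleep and counter check):

    # Poll until a block device with the given serial shows up (~15 tries, 2 s apart).
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            # grep -c prints 0 on no match but exits nonzero, hence the || true
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

    # Inverse check: wait until no device with that serial remains visible.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 2
        done
        return 0
    }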
00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 [2024-12-09 09:35:14.071693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.993 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:40.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:40.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:40.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:40.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:40.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:42.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:42.287 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.548 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:42.549 
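The seq 1 5 marker above opens the next phase of the test (target/rpc.sh lines 99-107 in the trace that follows): five rounds of the full subsystem lifecycle driven purely over RPC, with no host connect in between. A sketch assembled from the rpc_cmd calls visible in the xtrace (rpc_cmd wraps scripts/rpc.py against the running target):

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done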
09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 [2024-12-09 09:35:17.810195] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 [2024-12-09 09:35:17.882368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 
09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 [2024-12-09 09:35:17.950573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.549 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 [2024-12-09 09:35:18.022805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 [2024-12-09 09:35:18.095037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:42.809 "tick_rate": 2400000000, 00:17:42.809 "poll_groups": [ 00:17:42.809 { 00:17:42.809 "name": "nvmf_tgt_poll_group_000", 00:17:42.809 "admin_qpairs": 0, 00:17:42.809 "io_qpairs": 224, 00:17:42.809 "current_admin_qpairs": 0, 00:17:42.809 "current_io_qpairs": 0, 00:17:42.809 "pending_bdev_io": 0, 00:17:42.809 "completed_nvme_io": 273, 00:17:42.809 "transports": [ 00:17:42.809 { 00:17:42.809 "trtype": "TCP" 00:17:42.809 } 00:17:42.809 ] 00:17:42.809 }, 00:17:42.809 { 00:17:42.809 "name": "nvmf_tgt_poll_group_001", 00:17:42.809 "admin_qpairs": 1, 00:17:42.809 "io_qpairs": 223, 00:17:42.809 "current_admin_qpairs": 0, 00:17:42.809 "current_io_qpairs": 0, 00:17:42.809 "pending_bdev_io": 0, 00:17:42.809 "completed_nvme_io": 520, 00:17:42.809 "transports": [ 00:17:42.809 { 00:17:42.809 "trtype": "TCP" 00:17:42.809 } 00:17:42.809 ] 00:17:42.809 }, 00:17:42.809 { 00:17:42.809 "name": "nvmf_tgt_poll_group_002", 00:17:42.809 "admin_qpairs": 6, 00:17:42.809 "io_qpairs": 218, 00:17:42.809 "current_admin_qpairs": 0, 00:17:42.809 "current_io_qpairs": 0, 00:17:42.809 "pending_bdev_io": 0, 00:17:42.809 "completed_nvme_io": 219, 00:17:42.809 "transports": [ 00:17:42.809 { 00:17:42.809 "trtype": "TCP" 00:17:42.809 } 00:17:42.809 ] 00:17:42.809 }, 00:17:42.809 { 00:17:42.809 "name": "nvmf_tgt_poll_group_003", 00:17:42.809 "admin_qpairs": 0, 00:17:42.809 "io_qpairs": 224, 00:17:42.809 "current_admin_qpairs": 0, 00:17:42.809 "current_io_qpairs": 0, 00:17:42.809 "pending_bdev_io": 0, 00:17:42.809 "completed_nvme_io": 227, 00:17:42.809 "transports": [ 00:17:42.809 { 00:17:42.809 "trtype": "TCP" 00:17:42.809 } 00:17:42.809 ] 00:17:42.809 } 00:17:42.809 ] 00:17:42.809 }' 00:17:42.809 09:35:18 
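The nvmf_get_stats JSON captured in $stats above is then reduced with the test's jsum helper, which sums one jq path across all poll groups. A sketch matching the jq/awk pipeline shown in the trace below at target/rpc.sh@19-@20 (feeding $stats via a here-string is an assumption about the plumbing):

    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 0+1+6+0 = 7 in this run
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 224+223+218+224 = 889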
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:42.809 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.069 rmmod nvme_tcp 00:17:43.069 rmmod nvme_fabrics 00:17:43.069 rmmod nvme_keyring 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2733633 ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2733633 ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2733633' 00:17:43.069 killing process with pid 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2733633 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.069 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.329 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.329 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:43.329 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.329 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.329 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:45.237 00:17:45.237 real 0m37.849s 00:17:45.237 user 1m53.748s 00:17:45.237 sys 0m7.723s 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 ************************************ 00:17:45.237 END TEST nvmf_rpc 00:17:45.237 ************************************ 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:45.237 ************************************ 00:17:45.237 START TEST nvmf_invalid 00:17:45.237 ************************************ 00:17:45.237 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:45.498 * Looking for test storage... 
00:17:45.498 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:45.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.498 --rc genhtml_branch_coverage=1 00:17:45.498 --rc genhtml_function_coverage=1 00:17:45.498 --rc genhtml_legend=1 00:17:45.498 --rc geninfo_all_blocks=1 00:17:45.498 --rc geninfo_unexecuted_blocks=1 00:17:45.498 00:17:45.498 ' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:45.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.498 --rc genhtml_branch_coverage=1 00:17:45.498 --rc genhtml_function_coverage=1 00:17:45.498 --rc genhtml_legend=1 00:17:45.498 --rc geninfo_all_blocks=1 00:17:45.498 --rc geninfo_unexecuted_blocks=1 00:17:45.498 00:17:45.498 ' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:45.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.498 --rc genhtml_branch_coverage=1 00:17:45.498 --rc genhtml_function_coverage=1 00:17:45.498 --rc genhtml_legend=1 00:17:45.498 --rc geninfo_all_blocks=1 00:17:45.498 --rc geninfo_unexecuted_blocks=1 00:17:45.498 00:17:45.498 ' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:45.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:45.498 --rc genhtml_branch_coverage=1 00:17:45.498 --rc genhtml_function_coverage=1 00:17:45.498 --rc genhtml_legend=1 00:17:45.498 --rc geninfo_all_blocks=1 00:17:45.498 --rc geninfo_unexecuted_blocks=1 00:17:45.498 00:17:45.498 ' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:45.498 09:35:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:45.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:45.498 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:45.499 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:53.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:53.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:53.644 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:53.645 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:53.645 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:53.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms 00:17:53.645 00:17:53.645 --- 10.0.0.2 ping statistics --- 00:17:53.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.645 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:53.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:53.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:17:53.645 00:17:53.645 --- 10.0.0.1 ping statistics --- 00:17:53.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.645 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2743291 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2743291 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2743291 ']' 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:53.645 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:53.645 [2024-12-09 09:35:28.496117] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
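The nvmftestinit sequence traced above moves one port of the e810 pair into a private network namespace so the target and initiator sides can exercise real hardware on a single host. A condensed sketch of just the commands visible in the xtrace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are from this run; the helpers' full bodies are not shown in this capture):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic; ipts adds an SPDK_NVMF comment tag
    ping -c 1 10.0.0.2                                             # target reachable from the initiator side...
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # ...and the initiator from the target side

With both pings answered, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the app accepts JSON-RPC on /var/tmp/spdk.sock.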
00:17:53.645 [2024-12-09 09:35:28.496184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.645 [2024-12-09 09:35:28.596621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.645 [2024-12-09 09:35:28.625159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.645 [2024-12-09 09:35:28.625210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.645 [2024-12-09 09:35:28.625219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.645 [2024-12-09 09:35:28.625226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.645 [2024-12-09 09:35:28.625232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.645 [2024-12-09 09:35:28.627275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.645 [2024-12-09 09:35:28.627402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.645 [2024-12-09 09:35:28.627560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.645 [2024-12-09 09:35:28.627560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:53.906 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7668 00:17:54.166 [2024-12-09 09:35:29.506765] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:54.166 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:54.166 { 00:17:54.166 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:17:54.166 "tgt_name": "foobar", 00:17:54.167 "method": "nvmf_create_subsystem", 00:17:54.167 "req_id": 1 00:17:54.167 } 00:17:54.167 Got JSON-RPC error response 00:17:54.167 response: 00:17:54.167 { 00:17:54.167 "code": -32603, 00:17:54.167 "message": "Unable to find target foobar" 00:17:54.167 }' 00:17:54.167 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:54.167 { 00:17:54.167 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:17:54.167 "tgt_name": "foobar", 00:17:54.167 "method": "nvmf_create_subsystem", 00:17:54.167 "req_id": 1 00:17:54.167 } 00:17:54.167 Got JSON-RPC error response 00:17:54.167 
response: 00:17:54.167 { 00:17:54.167 "code": -32603, 00:17:54.167 "message": "Unable to find target foobar" 00:17:54.167 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:54.167 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:54.167 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27922 00:17:54.427 [2024-12-09 09:35:29.695430] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27922: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:54.427 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:54.427 { 00:17:54.427 "nqn": "nqn.2016-06.io.spdk:cnode27922", 00:17:54.427 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:54.427 "method": "nvmf_create_subsystem", 00:17:54.427 "req_id": 1 00:17:54.427 } 00:17:54.427 Got JSON-RPC error response 00:17:54.427 response: 00:17:54.427 { 00:17:54.427 "code": -32602, 00:17:54.427 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:54.427 }' 00:17:54.427 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:54.427 { 00:17:54.427 "nqn": "nqn.2016-06.io.spdk:cnode27922", 00:17:54.427 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:54.427 "method": "nvmf_create_subsystem", 00:17:54.427 "req_id": 1 00:17:54.427 } 00:17:54.427 Got JSON-RPC error response 00:17:54.427 response: 00:17:54.427 { 00:17:54.427 "code": -32602, 00:17:54.427 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:54.427 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:54.427 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:54.427 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15985 00:17:54.688 [2024-12-09 09:35:29.884004] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15985: invalid model number 'SPDK_Controller' 00:17:54.688 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:54.688 { 00:17:54.688 "nqn": "nqn.2016-06.io.spdk:cnode15985", 00:17:54.688 "model_number": "SPDK_Controller\u001f", 00:17:54.688 "method": "nvmf_create_subsystem", 00:17:54.688 "req_id": 1 00:17:54.688 } 00:17:54.688 Got JSON-RPC error response 00:17:54.688 response: 00:17:54.688 { 00:17:54.688 "code": -32602, 00:17:54.688 "message": "Invalid MN SPDK_Controller\u001f" 00:17:54.688 }' 00:17:54.688 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:54.688 { 00:17:54.688 "nqn": "nqn.2016-06.io.spdk:cnode15985", 00:17:54.688 "model_number": "SPDK_Controller\u001f", 00:17:54.688 "method": "nvmf_create_subsystem", 00:17:54.688 "req_id": 1 00:17:54.688 } 00:17:54.688 Got JSON-RPC error response 00:17:54.688 response: 00:17:54.688 { 00:17:54.688 "code": -32602, 00:17:54.689 "message": "Invalid MN SPDK_Controller\u001f" 00:17:54.689 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:17:54.689 09:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:54.689 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:54.689 09:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:54.689 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:54.690 
09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 6 == \- ]] 00:17:54.690 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '6oL42amjM+CO3 /dev/null' 00:17:57.293 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.839 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:59.840 00:17:59.840 real 0m14.047s 00:17:59.840 user 0m20.642s 00:17:59.840 sys 0m6.628s 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:59.840 ************************************ 00:17:59.840 END TEST nvmf_invalid 00:17:59.840 ************************************ 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:59.840 ************************************ 00:17:59.840 START TEST nvmf_connect_stress 00:17:59.840 ************************************ 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:59.840 * Looking for test storage... 
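The long printf/echo run above is gen_random_s assembling a 21-character string one printable ASCII character at a time; because RANDOM is seeded (RANDOM=0 at invalid.sh@16), the sequence is deterministic for a given bash. A condensed sketch of the logic the xtrace shows (the chars array holds the decimal codes 32-127 listed above; the subscript expression is an assumption, since the trace only records the per-character printf %x / echo -e steps):

    gen_random_s() {
        local length=$1 ll string
        local chars=($(seq 32 127))   # decimal codes for ' ' .. DEL, as in the chars array above
        for ((ll = 0; ll < length; ll++)); do
            # pick a code, render it as \xHH, append the resulting character
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        echo "$string"
    }

After a quick check that the result does not begin with a dash (the [[ 6 == \- ]] test at invalid.sh@28), the string is echoed at invalid.sh@31; whatever consumes it next is elided from this capture, which resumes at the nvmftestfini teardown, the timing summary, and the END TEST banner.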
00:17:59.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:59.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.840 --rc genhtml_branch_coverage=1 00:17:59.840 --rc genhtml_function_coverage=1 00:17:59.840 --rc genhtml_legend=1 00:17:59.840 --rc geninfo_all_blocks=1 00:17:59.840 --rc geninfo_unexecuted_blocks=1 00:17:59.840 00:17:59.840 ' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:59.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.840 --rc genhtml_branch_coverage=1 00:17:59.840 --rc genhtml_function_coverage=1 00:17:59.840 --rc genhtml_legend=1 00:17:59.840 --rc geninfo_all_blocks=1 00:17:59.840 --rc geninfo_unexecuted_blocks=1 00:17:59.840 00:17:59.840 ' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:59.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.840 --rc genhtml_branch_coverage=1 00:17:59.840 --rc genhtml_function_coverage=1 00:17:59.840 --rc genhtml_legend=1 00:17:59.840 --rc geninfo_all_blocks=1 00:17:59.840 --rc geninfo_unexecuted_blocks=1 00:17:59.840 00:17:59.840 ' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:59.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.840 --rc genhtml_branch_coverage=1 00:17:59.840 --rc genhtml_function_coverage=1 00:17:59.840 --rc genhtml_legend=1 00:17:59.840 --rc geninfo_all_blocks=1 00:17:59.840 --rc geninfo_unexecuted_blocks=1 00:17:59.840 00:17:59.840 ' 00:17:59.840 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.840 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:59.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:59.841 09:35:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:18:08.041 09:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:08.041 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:08.041 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.041 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:08.042 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:08.042 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:08.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:08.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:18:08.042 00:18:08.042 --- 10.0.0.2 ping statistics --- 00:18:08.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.042 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:08.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:18:08.042 00:18:08.042 --- 10.0.0.1 ping statistics --- 00:18:08.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.042 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2748367 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2748367 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2748367 ']' 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:08.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.042 09:35:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 [2024-12-09 09:35:42.474895] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:08.042 [2024-12-09 09:35:42.474964] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.042 [2024-12-09 09:35:42.575931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:08.042 [2024-12-09 09:35:42.603451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.042 [2024-12-09 09:35:42.603504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.042 [2024-12-09 09:35:42.603513] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.042 [2024-12-09 09:35:42.603522] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.042 [2024-12-09 09:35:42.603528] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.042 [2024-12-09 09:35:42.605333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.042 [2024-12-09 09:35:42.605497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.042 [2024-12-09 09:35:42.605498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 [2024-12-09 09:35:43.338451] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
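At this point the harness has wired the two E810 ports into a point-to-point NVMe/TCP rig: the target port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420, both directions are verified with ping, and nvmf_tgt is started inside the namespace. A condensed sketch of that plumbing (not the common.sh functions verbatim, but the same commands with the interface, namespace, and address values exactly as they appear in the trace above):

    # Flush stale addresses on both ports
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    # Isolate the target port in its own network namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address the initiator side (root namespace) and the target side (in the netns)
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up, plus loopback inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP on the initiator-facing port; the SPDK_NVMF comment tag is
    # what lets teardown strip exactly this rule later via
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the SPDK target inside the namespace; -m 0xE pins reactors to cores 1-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

Provisioning then goes over the RPC socket /var/tmp/spdk.sock: nvmf_create_transport -t tcp -o -u 8192 and the 10-namespace subsystem nqn.2016-06.io.spdk:cnode1 above, followed below by a TCP listener on 10.0.0.2:4420 and the NULL1 null bdev (1000 MiB, 512-byte blocks) that backs the connect_stress run.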
00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.042 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.042 [2024-12-09 09:35:43.362823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.043 NULL1 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2748692 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:08.043 09:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.043 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.612 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.612 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:08.612 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.612 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.612 09:35:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:08.872 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.872 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:08.872 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:08.872 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.872 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.132 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.132 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:09.132 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.132 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.132 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.392 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:09.392 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.392 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.392 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:09.975 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.975 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:09.975 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:09.975 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.975 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:10.234 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.235 09:35:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:18.054 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.054 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.054 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.313 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:18.313 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:18.313 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.313 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2748692 00:18:18.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2748692) - No such process 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2748692 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:18.573 rmmod nvme_tcp 00:18:18.573 rmmod nvme_fabrics 00:18:18.573 rmmod nvme_keyring 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2748367 ']' 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2748367 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2748367 ']' 00:18:18.573 09:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2748367 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.573 09:35:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2748367 00:18:18.833 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.833 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2748367' 00:18:18.834 killing process with pid 2748367 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2748367 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2748367 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.834 09:35:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:21.386 00:18:21.386 real 0m21.418s 00:18:21.386 user 0m43.571s 00:18:21.386 sys 0m8.995s 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:21.386 ************************************ 00:18:21.386 END TEST nvmf_connect_stress 00:18:21.386 ************************************ 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:21.386 
09:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:21.386 ************************************ 00:18:21.386 START TEST nvmf_fused_ordering 00:18:21.386 ************************************ 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:18:21.386 * Looking for test storage... 00:18:21.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:21.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.386 --rc genhtml_branch_coverage=1 00:18:21.386 --rc genhtml_function_coverage=1 00:18:21.386 --rc genhtml_legend=1 00:18:21.386 --rc geninfo_all_blocks=1 00:18:21.386 --rc geninfo_unexecuted_blocks=1 00:18:21.386 00:18:21.386 ' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:21.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.386 --rc genhtml_branch_coverage=1 00:18:21.386 --rc genhtml_function_coverage=1 00:18:21.386 --rc genhtml_legend=1 00:18:21.386 --rc geninfo_all_blocks=1 00:18:21.386 --rc geninfo_unexecuted_blocks=1 00:18:21.386 00:18:21.386 ' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:21.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.386 --rc genhtml_branch_coverage=1 00:18:21.386 --rc genhtml_function_coverage=1 00:18:21.386 --rc genhtml_legend=1 00:18:21.386 --rc geninfo_all_blocks=1 00:18:21.386 --rc geninfo_unexecuted_blocks=1 00:18:21.386 00:18:21.386 ' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:21.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.386 --rc genhtml_branch_coverage=1 00:18:21.386 --rc genhtml_function_coverage=1 00:18:21.386 --rc genhtml_legend=1 00:18:21.386 --rc geninfo_all_blocks=1 00:18:21.386 --rc geninfo_unexecuted_blocks=1 00:18:21.386 00:18:21.386 ' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.386 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:18:21.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:18:21.387 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:18:29.597 09:36:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:29.597 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
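The e810/x722/mlx arrays above are filled from a pci_bus_cache map keyed by vendor:device, so only known Intel and Mellanox NIC models are considered; 0x8086/0x159b is the Intel E810 pairing reported for both ports in this run. Skipping the cache plumbing, a rough sysfs-only equivalent of the matching loop would be:

  intel=0x8086
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor")       # e.g. 0x8086
      device=$(<"$dev/device")       # e.g. 0x159b
      if [[ $vendor == "$intel" && $device == 0x159b ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done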
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:29.597 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:29.597 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:29.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:29.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
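Each matched PCI function is then mapped to its kernel interface through the net/ subdirectory of its sysfs node, keeping only interfaces whose link is up; that expansion is what yields the cvl_0_0 and cvl_0_1 names above. A standalone sketch (the operstate read approximates the script's "up == up" test):

  pci=0000:4b:00.0                   # example BDF from this run
  for path in /sys/bus/pci/devices/$pci/net/*; do
      net_dev=${path##*/}            # strip the sysfs prefix -> interface name
      state=$(<"$path/operstate")
      [[ $state == up ]] && echo "Found net devices under $pci: $net_dev"
  done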
-- # net_devs+=("${pci_net_devs[@]}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:29.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:29.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:18:29.598 00:18:29.598 --- 10.0.0.2 ping statistics --- 00:18:29.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.598 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:29.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:29.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:18:29.598 00:18:29.598 --- 10.0.0.1 ping statistics --- 00:18:29.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:29.598 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:29.598 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2755043 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2755043 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2755043 ']' 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
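nvmf_tcp_init splits the two ports into a target and an initiator side: cvl_0_0 moves into a fresh network namespace as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, a commented iptables rule opens TCP port 4420 (the comment lets teardown strip exactly this rule later), and a ping in each direction proves the path. Condensed from the trace above (run as root):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator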
/var/tmp/spdk.sock...' 00:18:29.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.598 [2024-12-09 09:36:04.103255] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:29.598 [2024-12-09 09:36:04.103323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:29.598 [2024-12-09 09:36:04.202918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.598 [2024-12-09 09:36:04.229079] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:29.598 [2024-12-09 09:36:04.229133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:29.598 [2024-12-09 09:36:04.229145] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:29.598 [2024-12-09 09:36:04.229153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:29.598 [2024-12-09 09:36:04.229160] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:29.598 [2024-12-09 09:36:04.229929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.598 [2024-12-09 09:36:04.974166] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.598 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
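waitforlisten 2755043 blocks until the nvmf_tgt just launched inside the namespace answers on /var/tmp/spdk.sock. A simplified stand-in for that polling loop is sketched below; waitforlisten_sketch is hypothetical, and the real helper in autotest_common.sh also retries an actual RPC rather than only testing for the socket.

  waitforlisten_sketch() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1     # target died during startup
          [ -S "$rpc_addr" ] && return 0             # RPC socket is up
          sleep 0.1
      done
      return 1                                       # timed out
  }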
-- # [[ 0 == 0 ]] 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.599 [2024-12-09 09:36:04.990392] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.599 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.599 NULL1 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.599 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:18:29.599 [2024-12-09 09:36:05.048236] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
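Taken together, the rpc_cmd calls traced above build the whole target before the initiator starts: a TCP transport, subsystem cnode1 (allow-any-host, serial SPDK00000000000001, up to 10 namespaces), a listener on the namespaced address, a 1000 MiB null bdev with 512-byte blocks (reported as "size: 1GB" below), and the bdev attached as namespace 1. Assuming the SPDK repo root as the working directory, the equivalent direct invocations are:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512   # name, size_mb, block_size
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'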
00:18:29.599 [2024-12-09 09:36:05.048294] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755191 ] 00:18:30.171 Attached to nqn.2016-06.io.spdk:cnode1 00:18:30.171 Namespace ID: 1 size: 1GB 00:18:30.171 fused_ordering(0) 00:18:30.171 fused_ordering(1) 00:18:30.171 fused_ordering(2) 00:18:30.171 fused_ordering(3) 00:18:30.171 fused_ordering(4) 00:18:30.171 fused_ordering(5) 00:18:30.171 fused_ordering(6) 00:18:30.171 fused_ordering(7) 00:18:30.171 fused_ordering(8) 00:18:30.171 fused_ordering(9) 00:18:30.171 fused_ordering(10) 00:18:30.171 fused_ordering(11) 00:18:30.171 fused_ordering(12) 00:18:30.171 fused_ordering(13) 00:18:30.171 fused_ordering(14) 00:18:30.171 fused_ordering(15) 00:18:30.171 fused_ordering(16) 00:18:30.171 fused_ordering(17) 00:18:30.171 fused_ordering(18) 00:18:30.171 fused_ordering(19) 00:18:30.171 fused_ordering(20) 00:18:30.171 fused_ordering(21) 00:18:30.171 fused_ordering(22) 00:18:30.171 fused_ordering(23) 00:18:30.171 fused_ordering(24) 00:18:30.171 fused_ordering(25) 00:18:30.171 fused_ordering(26) 00:18:30.171 fused_ordering(27) 00:18:30.171 fused_ordering(28) 00:18:30.171 fused_ordering(29) 00:18:30.171 fused_ordering(30) 00:18:30.171 fused_ordering(31) 00:18:30.171 fused_ordering(32) 00:18:30.171 fused_ordering(33) 00:18:30.171 fused_ordering(34) 00:18:30.171 fused_ordering(35) 00:18:30.171 fused_ordering(36) 00:18:30.171 fused_ordering(37) 00:18:30.171 fused_ordering(38) 00:18:30.171 fused_ordering(39) 00:18:30.171 fused_ordering(40) 00:18:30.171 fused_ordering(41) 00:18:30.171 fused_ordering(42) 00:18:30.171 fused_ordering(43) 00:18:30.171 fused_ordering(44) 00:18:30.171 fused_ordering(45) 00:18:30.171 fused_ordering(46) 00:18:30.171 fused_ordering(47) 00:18:30.171 fused_ordering(48) 00:18:30.171 fused_ordering(49) 00:18:30.171 fused_ordering(50) 00:18:30.171 fused_ordering(51) 00:18:30.171 fused_ordering(52) 00:18:30.171 fused_ordering(53) 00:18:30.171 fused_ordering(54) 00:18:30.171 fused_ordering(55) 00:18:30.171 fused_ordering(56) 00:18:30.171 fused_ordering(57) 00:18:30.171 fused_ordering(58) 00:18:30.171 fused_ordering(59) 00:18:30.171 fused_ordering(60) 00:18:30.171 fused_ordering(61) 00:18:30.171 fused_ordering(62) 00:18:30.171 fused_ordering(63) 00:18:30.171 fused_ordering(64) 00:18:30.171 fused_ordering(65) 00:18:30.171 fused_ordering(66) 00:18:30.171 fused_ordering(67) 00:18:30.171 fused_ordering(68) 00:18:30.171 fused_ordering(69) 00:18:30.171 fused_ordering(70) 00:18:30.171 fused_ordering(71) 00:18:30.171 fused_ordering(72) 00:18:30.171 fused_ordering(73) 00:18:30.171 fused_ordering(74) 00:18:30.171 fused_ordering(75) 00:18:30.171 fused_ordering(76) 00:18:30.171 fused_ordering(77) 00:18:30.171 fused_ordering(78) 00:18:30.171 fused_ordering(79) 00:18:30.171 fused_ordering(80) 00:18:30.171 fused_ordering(81) 00:18:30.171 fused_ordering(82) 00:18:30.171 fused_ordering(83) 00:18:30.171 fused_ordering(84) 00:18:30.171 fused_ordering(85) 00:18:30.171 fused_ordering(86) 00:18:30.171 fused_ordering(87) 00:18:30.171 fused_ordering(88) 00:18:30.171 fused_ordering(89) 00:18:30.171 fused_ordering(90) 00:18:30.171 fused_ordering(91) 00:18:30.171 fused_ordering(92) 00:18:30.171 fused_ordering(93) 00:18:30.171 fused_ordering(94) 00:18:30.171 fused_ordering(95) 00:18:30.171 fused_ordering(96) 00:18:30.171 fused_ordering(97) 00:18:30.171 fused_ordering(98) 
00:18:30.171 fused_ordering(99) 00:18:30.171 fused_ordering(100) 00:18:30.171 fused_ordering(101) 00:18:30.171 fused_ordering(102) 00:18:30.171 fused_ordering(103) 00:18:30.171 fused_ordering(104) 00:18:30.171 fused_ordering(105) 00:18:30.171 fused_ordering(106) 00:18:30.171 fused_ordering(107) 00:18:30.171 fused_ordering(108) 00:18:30.171 fused_ordering(109) 00:18:30.171 fused_ordering(110) 00:18:30.171 fused_ordering(111) 00:18:30.171 fused_ordering(112) 00:18:30.171 fused_ordering(113) 00:18:30.171 fused_ordering(114) 00:18:30.171 fused_ordering(115) 00:18:30.171 fused_ordering(116) 00:18:30.171 fused_ordering(117) 00:18:30.171 fused_ordering(118) 00:18:30.171 fused_ordering(119) 00:18:30.171 fused_ordering(120) 00:18:30.171 fused_ordering(121) 00:18:30.171 fused_ordering(122) 00:18:30.171 fused_ordering(123) 00:18:30.171 fused_ordering(124) 00:18:30.171 fused_ordering(125) 00:18:30.171 fused_ordering(126) 00:18:30.171 fused_ordering(127) 00:18:30.171 fused_ordering(128) 00:18:30.171 fused_ordering(129) 00:18:30.171 fused_ordering(130) 00:18:30.171 fused_ordering(131) 00:18:30.171 fused_ordering(132) 00:18:30.171 fused_ordering(133) 00:18:30.171 fused_ordering(134) 00:18:30.171 fused_ordering(135) 00:18:30.171 fused_ordering(136) 00:18:30.171 fused_ordering(137) 00:18:30.171 fused_ordering(138) 00:18:30.171 fused_ordering(139) 00:18:30.171 fused_ordering(140) 00:18:30.171 fused_ordering(141) 00:18:30.171 fused_ordering(142) 00:18:30.171 fused_ordering(143) 00:18:30.171 fused_ordering(144) 00:18:30.171 fused_ordering(145) 00:18:30.171 fused_ordering(146) 00:18:30.171 fused_ordering(147) 00:18:30.171 fused_ordering(148) 00:18:30.171 fused_ordering(149) 00:18:30.171 fused_ordering(150) 00:18:30.171 fused_ordering(151) 00:18:30.171 fused_ordering(152) 00:18:30.171 fused_ordering(153) 00:18:30.171 fused_ordering(154) 00:18:30.171 fused_ordering(155) 00:18:30.171 fused_ordering(156) 00:18:30.171 fused_ordering(157) 00:18:30.171 fused_ordering(158) 00:18:30.171 fused_ordering(159) 00:18:30.171 fused_ordering(160) 00:18:30.171 fused_ordering(161) 00:18:30.171 fused_ordering(162) 00:18:30.171 fused_ordering(163) 00:18:30.171 fused_ordering(164) 00:18:30.171 fused_ordering(165) 00:18:30.171 fused_ordering(166) 00:18:30.171 fused_ordering(167) 00:18:30.171 fused_ordering(168) 00:18:30.171 fused_ordering(169) 00:18:30.171 fused_ordering(170) 00:18:30.171 fused_ordering(171) 00:18:30.171 fused_ordering(172) 00:18:30.171 fused_ordering(173) 00:18:30.171 fused_ordering(174) 00:18:30.171 fused_ordering(175) 00:18:30.171 fused_ordering(176) 00:18:30.171 fused_ordering(177) 00:18:30.171 fused_ordering(178) 00:18:30.171 fused_ordering(179) 00:18:30.171 fused_ordering(180) 00:18:30.171 fused_ordering(181) 00:18:30.171 fused_ordering(182) 00:18:30.171 fused_ordering(183) 00:18:30.171 fused_ordering(184) 00:18:30.171 fused_ordering(185) 00:18:30.171 fused_ordering(186) 00:18:30.171 fused_ordering(187) 00:18:30.171 fused_ordering(188) 00:18:30.171 fused_ordering(189) 00:18:30.171 fused_ordering(190) 00:18:30.171 fused_ordering(191) 00:18:30.171 fused_ordering(192) 00:18:30.171 fused_ordering(193) 00:18:30.171 fused_ordering(194) 00:18:30.171 fused_ordering(195) 00:18:30.171 fused_ordering(196) 00:18:30.171 fused_ordering(197) 00:18:30.171 fused_ordering(198) 00:18:30.171 fused_ordering(199) 00:18:30.171 fused_ordering(200) 00:18:30.171 fused_ordering(201) 00:18:30.171 fused_ordering(202) 00:18:30.171 fused_ordering(203) 00:18:30.171 fused_ordering(204) 00:18:30.171 fused_ordering(205) 00:18:30.433 
fused_ordering(206) 00:18:30.433 fused_ordering(207) 00:18:30.433 fused_ordering(208) 00:18:30.433 fused_ordering(209) 00:18:30.433 fused_ordering(210) 00:18:30.433 fused_ordering(211) 00:18:30.433 fused_ordering(212) 00:18:30.433 fused_ordering(213) 00:18:30.433 fused_ordering(214) 00:18:30.433 fused_ordering(215) 00:18:30.433 fused_ordering(216) 00:18:30.433 fused_ordering(217) 00:18:30.433 fused_ordering(218) 00:18:30.433 fused_ordering(219) 00:18:30.433 fused_ordering(220) 00:18:30.433 fused_ordering(221) 00:18:30.433 fused_ordering(222) 00:18:30.433 fused_ordering(223) 00:18:30.433 fused_ordering(224) 00:18:30.433 fused_ordering(225) 00:18:30.433 fused_ordering(226) 00:18:30.433 fused_ordering(227) 00:18:30.433 fused_ordering(228) 00:18:30.433 fused_ordering(229) 00:18:30.433 fused_ordering(230) 00:18:30.433 fused_ordering(231) 00:18:30.433 fused_ordering(232) 00:18:30.433 fused_ordering(233) 00:18:30.433 fused_ordering(234) 00:18:30.433 fused_ordering(235) 00:18:30.433 fused_ordering(236) 00:18:30.433 fused_ordering(237) 00:18:30.433 fused_ordering(238) 00:18:30.433 fused_ordering(239) 00:18:30.433 fused_ordering(240) 00:18:30.433 fused_ordering(241) 00:18:30.433 fused_ordering(242) 00:18:30.433 fused_ordering(243) 00:18:30.433 fused_ordering(244) 00:18:30.433 fused_ordering(245) 00:18:30.433 fused_ordering(246) 00:18:30.433 fused_ordering(247) 00:18:30.433 fused_ordering(248) 00:18:30.433 fused_ordering(249) 00:18:30.433 fused_ordering(250) 00:18:30.433 fused_ordering(251) 00:18:30.433 fused_ordering(252) 00:18:30.433 fused_ordering(253) 00:18:30.433 fused_ordering(254) 00:18:30.433 fused_ordering(255) 00:18:30.433 fused_ordering(256) 00:18:30.433 fused_ordering(257) 00:18:30.433 fused_ordering(258) 00:18:30.433 fused_ordering(259) 00:18:30.433 fused_ordering(260) 00:18:30.433 fused_ordering(261) 00:18:30.433 fused_ordering(262) 00:18:30.433 fused_ordering(263) 00:18:30.433 fused_ordering(264) 00:18:30.433 fused_ordering(265) 00:18:30.433 fused_ordering(266) 00:18:30.433 fused_ordering(267) 00:18:30.433 fused_ordering(268) 00:18:30.433 fused_ordering(269) 00:18:30.433 fused_ordering(270) 00:18:30.433 fused_ordering(271) 00:18:30.433 fused_ordering(272) 00:18:30.433 fused_ordering(273) 00:18:30.433 fused_ordering(274) 00:18:30.433 fused_ordering(275) 00:18:30.433 fused_ordering(276) 00:18:30.433 fused_ordering(277) 00:18:30.433 fused_ordering(278) 00:18:30.433 fused_ordering(279) 00:18:30.433 fused_ordering(280) 00:18:30.433 fused_ordering(281) 00:18:30.433 fused_ordering(282) 00:18:30.433 fused_ordering(283) 00:18:30.433 fused_ordering(284) 00:18:30.433 fused_ordering(285) 00:18:30.433 fused_ordering(286) 00:18:30.433 fused_ordering(287) 00:18:30.433 fused_ordering(288) 00:18:30.433 fused_ordering(289) 00:18:30.433 fused_ordering(290) 00:18:30.433 fused_ordering(291) 00:18:30.433 fused_ordering(292) 00:18:30.433 fused_ordering(293) 00:18:30.433 fused_ordering(294) 00:18:30.433 fused_ordering(295) 00:18:30.433 fused_ordering(296) 00:18:30.433 fused_ordering(297) 00:18:30.433 fused_ordering(298) 00:18:30.433 fused_ordering(299) 00:18:30.433 fused_ordering(300) 00:18:30.433 fused_ordering(301) 00:18:30.433 fused_ordering(302) 00:18:30.433 fused_ordering(303) 00:18:30.433 fused_ordering(304) 00:18:30.433 fused_ordering(305) 00:18:30.433 fused_ordering(306) 00:18:30.433 fused_ordering(307) 00:18:30.433 fused_ordering(308) 00:18:30.433 fused_ordering(309) 00:18:30.433 fused_ordering(310) 00:18:30.433 fused_ordering(311) 00:18:30.433 fused_ordering(312) 00:18:30.433 fused_ordering(313) 
00:18:30.433 fused_ordering(314) 00:18:30.433 fused_ordering(315) 00:18:30.433 fused_ordering(316) 00:18:30.433 fused_ordering(317) 00:18:30.433 fused_ordering(318) 00:18:30.433 fused_ordering(319) 00:18:30.433 fused_ordering(320) 00:18:30.433 fused_ordering(321) 00:18:30.433 fused_ordering(322) 00:18:30.433 fused_ordering(323) 00:18:30.433 fused_ordering(324) 00:18:30.433 fused_ordering(325) 00:18:30.433 fused_ordering(326) 00:18:30.433 fused_ordering(327) 00:18:30.433 fused_ordering(328) 00:18:30.433 fused_ordering(329) 00:18:30.433 fused_ordering(330) 00:18:30.433 fused_ordering(331) 00:18:30.433 fused_ordering(332) 00:18:30.433 fused_ordering(333) 00:18:30.433 fused_ordering(334) 00:18:30.433 fused_ordering(335) 00:18:30.433 fused_ordering(336) 00:18:30.433 fused_ordering(337) 00:18:30.433 fused_ordering(338) 00:18:30.433 fused_ordering(339) 00:18:30.433 fused_ordering(340) 00:18:30.433 fused_ordering(341) 00:18:30.433 fused_ordering(342) 00:18:30.433 fused_ordering(343) 00:18:30.433 fused_ordering(344) 00:18:30.433 fused_ordering(345) 00:18:30.433 fused_ordering(346) 00:18:30.433 fused_ordering(347) 00:18:30.433 fused_ordering(348) 00:18:30.433 fused_ordering(349) 00:18:30.433 fused_ordering(350) 00:18:30.433 fused_ordering(351) 00:18:30.433 fused_ordering(352) 00:18:30.433 fused_ordering(353) 00:18:30.433 fused_ordering(354) 00:18:30.433 fused_ordering(355) 00:18:30.433 fused_ordering(356) 00:18:30.433 fused_ordering(357) 00:18:30.433 fused_ordering(358) 00:18:30.433 fused_ordering(359) 00:18:30.433 fused_ordering(360) 00:18:30.433 fused_ordering(361) 00:18:30.433 fused_ordering(362) 00:18:30.433 fused_ordering(363) 00:18:30.433 fused_ordering(364) 00:18:30.433 fused_ordering(365) 00:18:30.433 fused_ordering(366) 00:18:30.433 fused_ordering(367) 00:18:30.433 fused_ordering(368) 00:18:30.433 fused_ordering(369) 00:18:30.433 fused_ordering(370) 00:18:30.433 fused_ordering(371) 00:18:30.433 fused_ordering(372) 00:18:30.433 fused_ordering(373) 00:18:30.433 fused_ordering(374) 00:18:30.433 fused_ordering(375) 00:18:30.433 fused_ordering(376) 00:18:30.433 fused_ordering(377) 00:18:30.433 fused_ordering(378) 00:18:30.433 fused_ordering(379) 00:18:30.433 fused_ordering(380) 00:18:30.433 fused_ordering(381) 00:18:30.433 fused_ordering(382) 00:18:30.433 fused_ordering(383) 00:18:30.433 fused_ordering(384) 00:18:30.433 fused_ordering(385) 00:18:30.433 fused_ordering(386) 00:18:30.433 fused_ordering(387) 00:18:30.433 fused_ordering(388) 00:18:30.433 fused_ordering(389) 00:18:30.433 fused_ordering(390) 00:18:30.433 fused_ordering(391) 00:18:30.433 fused_ordering(392) 00:18:30.433 fused_ordering(393) 00:18:30.433 fused_ordering(394) 00:18:30.433 fused_ordering(395) 00:18:30.433 fused_ordering(396) 00:18:30.433 fused_ordering(397) 00:18:30.433 fused_ordering(398) 00:18:30.433 fused_ordering(399) 00:18:30.433 fused_ordering(400) 00:18:30.433 fused_ordering(401) 00:18:30.433 fused_ordering(402) 00:18:30.433 fused_ordering(403) 00:18:30.433 fused_ordering(404) 00:18:30.434 fused_ordering(405) 00:18:30.434 fused_ordering(406) 00:18:30.434 fused_ordering(407) 00:18:30.434 fused_ordering(408) 00:18:30.434 fused_ordering(409) 00:18:30.434 fused_ordering(410) 00:18:31.006 fused_ordering(411) 00:18:31.006 fused_ordering(412) 00:18:31.006 fused_ordering(413) 00:18:31.006 fused_ordering(414) 00:18:31.006 fused_ordering(415) 00:18:31.006 fused_ordering(416) 00:18:31.006 fused_ordering(417) 00:18:31.006 fused_ordering(418) 00:18:31.006 fused_ordering(419) 00:18:31.006 fused_ordering(420) 00:18:31.006 
fused_ordering(421) 00:18:31.006 fused_ordering(422) 00:18:31.006 fused_ordering(423) 00:18:31.006 fused_ordering(424) 00:18:31.006 fused_ordering(425) 00:18:31.006 fused_ordering(426) 00:18:31.006 fused_ordering(427) 00:18:31.006 fused_ordering(428) 00:18:31.006 fused_ordering(429) 00:18:31.006 fused_ordering(430) 00:18:31.006 fused_ordering(431) 00:18:31.006 fused_ordering(432) 00:18:31.006 fused_ordering(433) 00:18:31.006 fused_ordering(434) 00:18:31.006 fused_ordering(435) 00:18:31.006 fused_ordering(436) 00:18:31.006 fused_ordering(437) 00:18:31.006 fused_ordering(438) 00:18:31.006 fused_ordering(439) 00:18:31.006 fused_ordering(440) 00:18:31.006 fused_ordering(441) 00:18:31.006 fused_ordering(442) 00:18:31.006 fused_ordering(443) 00:18:31.006 fused_ordering(444) 00:18:31.006 fused_ordering(445) 00:18:31.006 fused_ordering(446) 00:18:31.006 fused_ordering(447) 00:18:31.006 fused_ordering(448) 00:18:31.006 fused_ordering(449) 00:18:31.006 fused_ordering(450) 00:18:31.006 fused_ordering(451) 00:18:31.006 fused_ordering(452) 00:18:31.006 fused_ordering(453) 00:18:31.006 fused_ordering(454) 00:18:31.006 fused_ordering(455) 00:18:31.006 fused_ordering(456) 00:18:31.006 fused_ordering(457) 00:18:31.006 fused_ordering(458) 00:18:31.006 fused_ordering(459) 00:18:31.006 fused_ordering(460) 00:18:31.006 fused_ordering(461) 00:18:31.006 fused_ordering(462) 00:18:31.006 fused_ordering(463) 00:18:31.006 fused_ordering(464) 00:18:31.006 fused_ordering(465) 00:18:31.006 fused_ordering(466) 00:18:31.006 fused_ordering(467) 00:18:31.006 fused_ordering(468) 00:18:31.006 fused_ordering(469) 00:18:31.006 fused_ordering(470) 00:18:31.006 fused_ordering(471) 00:18:31.006 fused_ordering(472) 00:18:31.006 fused_ordering(473) 00:18:31.006 fused_ordering(474) 00:18:31.006 fused_ordering(475) 00:18:31.006 fused_ordering(476) 00:18:31.006 fused_ordering(477) 00:18:31.006 fused_ordering(478) 00:18:31.006 fused_ordering(479) 00:18:31.006 fused_ordering(480) 00:18:31.006 fused_ordering(481) 00:18:31.006 fused_ordering(482) 00:18:31.006 fused_ordering(483) 00:18:31.006 fused_ordering(484) 00:18:31.006 fused_ordering(485) 00:18:31.006 fused_ordering(486) 00:18:31.006 fused_ordering(487) 00:18:31.006 fused_ordering(488) 00:18:31.006 fused_ordering(489) 00:18:31.006 fused_ordering(490) 00:18:31.006 fused_ordering(491) 00:18:31.006 fused_ordering(492) 00:18:31.006 fused_ordering(493) 00:18:31.006 fused_ordering(494) 00:18:31.006 fused_ordering(495) 00:18:31.006 fused_ordering(496) 00:18:31.006 fused_ordering(497) 00:18:31.006 fused_ordering(498) 00:18:31.006 fused_ordering(499) 00:18:31.006 fused_ordering(500) 00:18:31.006 fused_ordering(501) 00:18:31.006 fused_ordering(502) 00:18:31.006 fused_ordering(503) 00:18:31.006 fused_ordering(504) 00:18:31.006 fused_ordering(505) 00:18:31.006 fused_ordering(506) 00:18:31.006 fused_ordering(507) 00:18:31.006 fused_ordering(508) 00:18:31.006 fused_ordering(509) 00:18:31.006 fused_ordering(510) 00:18:31.006 fused_ordering(511) 00:18:31.006 fused_ordering(512) 00:18:31.006 fused_ordering(513) 00:18:31.006 fused_ordering(514) 00:18:31.006 fused_ordering(515) 00:18:31.006 fused_ordering(516) 00:18:31.006 fused_ordering(517) 00:18:31.006 fused_ordering(518) 00:18:31.006 fused_ordering(519) 00:18:31.006 fused_ordering(520) 00:18:31.006 fused_ordering(521) 00:18:31.006 fused_ordering(522) 00:18:31.006 fused_ordering(523) 00:18:31.006 fused_ordering(524) 00:18:31.006 fused_ordering(525) 00:18:31.006 fused_ordering(526) 00:18:31.006 fused_ordering(527) 00:18:31.006 fused_ordering(528) 
00:18:31.006 fused_ordering(529) 00:18:31.006 fused_ordering(530) 00:18:31.006 fused_ordering(531) 00:18:31.006 fused_ordering(532) 00:18:31.006 fused_ordering(533) 00:18:31.006 fused_ordering(534) 00:18:31.006 fused_ordering(535) 00:18:31.006 fused_ordering(536) 00:18:31.006 fused_ordering(537) 00:18:31.006 fused_ordering(538) 00:18:31.006 fused_ordering(539) 00:18:31.006 fused_ordering(540) 00:18:31.006 fused_ordering(541) 00:18:31.006 fused_ordering(542) 00:18:31.006 fused_ordering(543) 00:18:31.006 fused_ordering(544) 00:18:31.006 fused_ordering(545) 00:18:31.006 fused_ordering(546) 00:18:31.006 fused_ordering(547) 00:18:31.006 fused_ordering(548) 00:18:31.006 fused_ordering(549) 00:18:31.006 fused_ordering(550) 00:18:31.006 fused_ordering(551) 00:18:31.006 fused_ordering(552) 00:18:31.006 fused_ordering(553) 00:18:31.006 fused_ordering(554) 00:18:31.006 fused_ordering(555) 00:18:31.006 fused_ordering(556) 00:18:31.006 fused_ordering(557) 00:18:31.006 fused_ordering(558) 00:18:31.006 fused_ordering(559) 00:18:31.006 fused_ordering(560) 00:18:31.006 fused_ordering(561) 00:18:31.006 fused_ordering(562) 00:18:31.006 fused_ordering(563) 00:18:31.006 fused_ordering(564) 00:18:31.006 fused_ordering(565) 00:18:31.006 fused_ordering(566) 00:18:31.006 fused_ordering(567) 00:18:31.006 fused_ordering(568) 00:18:31.006 fused_ordering(569) 00:18:31.006 fused_ordering(570) 00:18:31.006 fused_ordering(571) 00:18:31.006 fused_ordering(572) 00:18:31.007 fused_ordering(573) 00:18:31.007 fused_ordering(574) 00:18:31.007 fused_ordering(575) 00:18:31.007 fused_ordering(576) 00:18:31.007 fused_ordering(577) 00:18:31.007 fused_ordering(578) 00:18:31.007 fused_ordering(579) 00:18:31.007 fused_ordering(580) 00:18:31.007 fused_ordering(581) 00:18:31.007 fused_ordering(582) 00:18:31.007 fused_ordering(583) 00:18:31.007 fused_ordering(584) 00:18:31.007 fused_ordering(585) 00:18:31.007 fused_ordering(586) 00:18:31.007 fused_ordering(587) 00:18:31.007 fused_ordering(588) 00:18:31.007 fused_ordering(589) 00:18:31.007 fused_ordering(590) 00:18:31.007 fused_ordering(591) 00:18:31.007 fused_ordering(592) 00:18:31.007 fused_ordering(593) 00:18:31.007 fused_ordering(594) 00:18:31.007 fused_ordering(595) 00:18:31.007 fused_ordering(596) 00:18:31.007 fused_ordering(597) 00:18:31.007 fused_ordering(598) 00:18:31.007 fused_ordering(599) 00:18:31.007 fused_ordering(600) 00:18:31.007 fused_ordering(601) 00:18:31.007 fused_ordering(602) 00:18:31.007 fused_ordering(603) 00:18:31.007 fused_ordering(604) 00:18:31.007 fused_ordering(605) 00:18:31.007 fused_ordering(606) 00:18:31.007 fused_ordering(607) 00:18:31.007 fused_ordering(608) 00:18:31.007 fused_ordering(609) 00:18:31.007 fused_ordering(610) 00:18:31.007 fused_ordering(611) 00:18:31.007 fused_ordering(612) 00:18:31.007 fused_ordering(613) 00:18:31.007 fused_ordering(614) 00:18:31.007 fused_ordering(615) 00:18:31.663 fused_ordering(616) 00:18:31.663 fused_ordering(617) 00:18:31.663 fused_ordering(618) 00:18:31.663 fused_ordering(619) 00:18:31.663 fused_ordering(620) 00:18:31.663 fused_ordering(621) 00:18:31.663 fused_ordering(622) 00:18:31.663 fused_ordering(623) 00:18:31.663 fused_ordering(624) 00:18:31.663 fused_ordering(625) 00:18:31.663 fused_ordering(626) 00:18:31.663 fused_ordering(627) 00:18:31.663 fused_ordering(628) 00:18:31.663 fused_ordering(629) 00:18:31.663 fused_ordering(630) 00:18:31.663 fused_ordering(631) 00:18:31.663 fused_ordering(632) 00:18:31.663 fused_ordering(633) 00:18:31.663 fused_ordering(634) 00:18:31.663 fused_ordering(635) 00:18:31.663 
fused_ordering(636) 00:18:31.663 fused_ordering(637) 00:18:31.663 fused_ordering(638) 00:18:31.663 fused_ordering(639) 00:18:31.663 fused_ordering(640) 00:18:31.663 fused_ordering(641) 00:18:31.663 fused_ordering(642) 00:18:31.663 fused_ordering(643) 00:18:31.663 fused_ordering(644) 00:18:31.663 fused_ordering(645) 00:18:31.663 fused_ordering(646) 00:18:31.663 fused_ordering(647) 00:18:31.663 fused_ordering(648) 00:18:31.663 fused_ordering(649) 00:18:31.663 fused_ordering(650) 00:18:31.663 fused_ordering(651) 00:18:31.663 fused_ordering(652) 00:18:31.663 fused_ordering(653) 00:18:31.663 fused_ordering(654) 00:18:31.663 fused_ordering(655) 00:18:31.663 fused_ordering(656) 00:18:31.663 fused_ordering(657) 00:18:31.663 fused_ordering(658) 00:18:31.663 fused_ordering(659) 00:18:31.663 fused_ordering(660) 00:18:31.663 fused_ordering(661) 00:18:31.663 fused_ordering(662) 00:18:31.663 fused_ordering(663) 00:18:31.663 fused_ordering(664) 00:18:31.663 fused_ordering(665) 00:18:31.663 fused_ordering(666) 00:18:31.663 fused_ordering(667) 00:18:31.663 fused_ordering(668) 00:18:31.663 fused_ordering(669) 00:18:31.663 fused_ordering(670) 00:18:31.663 fused_ordering(671) 00:18:31.663 fused_ordering(672) 00:18:31.663 fused_ordering(673) 00:18:31.663 fused_ordering(674) 00:18:31.663 fused_ordering(675) 00:18:31.663 fused_ordering(676) 00:18:31.663 fused_ordering(677) 00:18:31.663 fused_ordering(678) 00:18:31.663 fused_ordering(679) 00:18:31.663 fused_ordering(680) 00:18:31.663 fused_ordering(681) 00:18:31.663 fused_ordering(682) 00:18:31.663 fused_ordering(683) 00:18:31.663 fused_ordering(684) 00:18:31.663 fused_ordering(685) 00:18:31.663 fused_ordering(686) 00:18:31.663 fused_ordering(687) 00:18:31.663 fused_ordering(688) 00:18:31.663 fused_ordering(689) 00:18:31.663 fused_ordering(690) 00:18:31.663 fused_ordering(691) 00:18:31.663 fused_ordering(692) 00:18:31.663 fused_ordering(693) 00:18:31.663 fused_ordering(694) 00:18:31.663 fused_ordering(695) 00:18:31.663 fused_ordering(696) 00:18:31.663 fused_ordering(697) 00:18:31.663 fused_ordering(698) 00:18:31.663 fused_ordering(699) 00:18:31.663 fused_ordering(700) 00:18:31.663 fused_ordering(701) 00:18:31.663 fused_ordering(702) 00:18:31.663 fused_ordering(703) 00:18:31.663 fused_ordering(704) 00:18:31.663 fused_ordering(705) 00:18:31.663 fused_ordering(706) 00:18:31.663 fused_ordering(707) 00:18:31.663 fused_ordering(708) 00:18:31.663 fused_ordering(709) 00:18:31.663 fused_ordering(710) 00:18:31.663 fused_ordering(711) 00:18:31.663 fused_ordering(712) 00:18:31.663 fused_ordering(713) 00:18:31.663 fused_ordering(714) 00:18:31.663 fused_ordering(715) 00:18:31.663 fused_ordering(716) 00:18:31.663 fused_ordering(717) 00:18:31.663 fused_ordering(718) 00:18:31.663 fused_ordering(719) 00:18:31.663 fused_ordering(720) 00:18:31.663 fused_ordering(721) 00:18:31.663 fused_ordering(722) 00:18:31.663 fused_ordering(723) 00:18:31.663 fused_ordering(724) 00:18:31.663 fused_ordering(725) 00:18:31.663 fused_ordering(726) 00:18:31.663 fused_ordering(727) 00:18:31.663 fused_ordering(728) 00:18:31.663 fused_ordering(729) 00:18:31.663 fused_ordering(730) 00:18:31.663 fused_ordering(731) 00:18:31.663 fused_ordering(732) 00:18:31.663 fused_ordering(733) 00:18:31.663 fused_ordering(734) 00:18:31.663 fused_ordering(735) 00:18:31.663 fused_ordering(736) 00:18:31.663 fused_ordering(737) 00:18:31.663 fused_ordering(738) 00:18:31.663 fused_ordering(739) 00:18:31.663 fused_ordering(740) 00:18:31.663 fused_ordering(741) 00:18:31.663 fused_ordering(742) 00:18:31.663 fused_ordering(743) 
00:18:31.663 fused_ordering(744) 00:18:31.663 fused_ordering(745) 00:18:31.663 fused_ordering(746) 00:18:31.663 fused_ordering(747) 00:18:31.663 fused_ordering(748) 00:18:31.663 fused_ordering(749) 00:18:31.663 fused_ordering(750) 00:18:31.663 fused_ordering(751) 00:18:31.663 fused_ordering(752) 00:18:31.663 fused_ordering(753) 00:18:31.663 fused_ordering(754) 00:18:31.663 fused_ordering(755) 00:18:31.663 fused_ordering(756) 00:18:31.663 fused_ordering(757) 00:18:31.663 fused_ordering(758) 00:18:31.663 fused_ordering(759) 00:18:31.663 fused_ordering(760) 00:18:31.663 fused_ordering(761) 00:18:31.663 fused_ordering(762) 00:18:31.663 fused_ordering(763) 00:18:31.663 fused_ordering(764) 00:18:31.663 fused_ordering(765) 00:18:31.663 fused_ordering(766) 00:18:31.663 fused_ordering(767) 00:18:31.663 fused_ordering(768) 00:18:31.663 fused_ordering(769) 00:18:31.663 fused_ordering(770) 00:18:31.663 fused_ordering(771) 00:18:31.663 fused_ordering(772) 00:18:31.663 fused_ordering(773) 00:18:31.663 fused_ordering(774) 00:18:31.663 fused_ordering(775) 00:18:31.663 fused_ordering(776) 00:18:31.663 fused_ordering(777) 00:18:31.663 fused_ordering(778) 00:18:31.663 fused_ordering(779) 00:18:31.663 fused_ordering(780) 00:18:31.663 fused_ordering(781) 00:18:31.663 fused_ordering(782) 00:18:31.663 fused_ordering(783) 00:18:31.663 fused_ordering(784) 00:18:31.663 fused_ordering(785) 00:18:31.663 fused_ordering(786) 00:18:31.663 fused_ordering(787) 00:18:31.663 fused_ordering(788) 00:18:31.663 fused_ordering(789) 00:18:31.663 fused_ordering(790) 00:18:31.663 fused_ordering(791) 00:18:31.663 fused_ordering(792) 00:18:31.663 fused_ordering(793) 00:18:31.663 fused_ordering(794) 00:18:31.663 fused_ordering(795) 00:18:31.663 fused_ordering(796) 00:18:31.663 fused_ordering(797) 00:18:31.663 fused_ordering(798) 00:18:31.663 fused_ordering(799) 00:18:31.663 fused_ordering(800) 00:18:31.663 fused_ordering(801) 00:18:31.663 fused_ordering(802) 00:18:31.663 fused_ordering(803) 00:18:31.663 fused_ordering(804) 00:18:31.663 fused_ordering(805) 00:18:31.663 fused_ordering(806) 00:18:31.663 fused_ordering(807) 00:18:31.663 fused_ordering(808) 00:18:31.663 fused_ordering(809) 00:18:31.663 fused_ordering(810) 00:18:31.663 fused_ordering(811) 00:18:31.663 fused_ordering(812) 00:18:31.663 fused_ordering(813) 00:18:31.663 fused_ordering(814) 00:18:31.663 fused_ordering(815) 00:18:31.663 fused_ordering(816) 00:18:31.663 fused_ordering(817) 00:18:31.663 fused_ordering(818) 00:18:31.663 fused_ordering(819) 00:18:31.663 fused_ordering(820) 00:18:32.293 fused_ordering(821) 00:18:32.293 fused_ordering(822) 00:18:32.293 fused_ordering(823) 00:18:32.293 fused_ordering(824) 00:18:32.293 fused_ordering(825) 00:18:32.293 fused_ordering(826) 00:18:32.293 fused_ordering(827) 00:18:32.293 fused_ordering(828) 00:18:32.293 fused_ordering(829) 00:18:32.293 fused_ordering(830) 00:18:32.293 fused_ordering(831) 00:18:32.293 fused_ordering(832) 00:18:32.293 fused_ordering(833) 00:18:32.293 fused_ordering(834) 00:18:32.293 fused_ordering(835) 00:18:32.293 fused_ordering(836) 00:18:32.293 fused_ordering(837) 00:18:32.293 fused_ordering(838) 00:18:32.293 fused_ordering(839) 00:18:32.293 fused_ordering(840) 00:18:32.293 fused_ordering(841) 00:18:32.293 fused_ordering(842) 00:18:32.293 fused_ordering(843) 00:18:32.293 fused_ordering(844) 00:18:32.293 fused_ordering(845) 00:18:32.293 fused_ordering(846) 00:18:32.293 fused_ordering(847) 00:18:32.293 fused_ordering(848) 00:18:32.293 fused_ordering(849) 00:18:32.293 fused_ordering(850) 00:18:32.293 
fused_ordering(851) 00:18:32.293 fused_ordering(852) 00:18:32.293 fused_ordering(853) 00:18:32.293 fused_ordering(854) 00:18:32.293 fused_ordering(855) 00:18:32.293 fused_ordering(856) 00:18:32.293 fused_ordering(857) 00:18:32.293 fused_ordering(858) 00:18:32.293 fused_ordering(859) 00:18:32.293 fused_ordering(860) 00:18:32.293 fused_ordering(861) 00:18:32.293 fused_ordering(862) 00:18:32.293 fused_ordering(863) 00:18:32.293 fused_ordering(864) 00:18:32.293 fused_ordering(865) 00:18:32.293 fused_ordering(866) 00:18:32.293 fused_ordering(867) 00:18:32.293 fused_ordering(868) 00:18:32.293 fused_ordering(869) 00:18:32.293 fused_ordering(870) 00:18:32.293 fused_ordering(871) 00:18:32.293 fused_ordering(872) 00:18:32.293 fused_ordering(873) 00:18:32.293 fused_ordering(874) 00:18:32.293 fused_ordering(875) 00:18:32.293 fused_ordering(876) 00:18:32.293 fused_ordering(877) 00:18:32.293 fused_ordering(878) 00:18:32.293 fused_ordering(879) 00:18:32.293 fused_ordering(880) 00:18:32.293 fused_ordering(881) 00:18:32.293 fused_ordering(882) 00:18:32.293 fused_ordering(883) 00:18:32.293 fused_ordering(884) 00:18:32.293 fused_ordering(885) 00:18:32.293 fused_ordering(886) 00:18:32.293 fused_ordering(887) 00:18:32.293 fused_ordering(888) 00:18:32.293 fused_ordering(889) 00:18:32.293 fused_ordering(890) 00:18:32.293 fused_ordering(891) 00:18:32.293 fused_ordering(892) 00:18:32.293 fused_ordering(893) 00:18:32.293 fused_ordering(894) 00:18:32.293 fused_ordering(895) 00:18:32.293 fused_ordering(896) 00:18:32.293 fused_ordering(897) 00:18:32.293 fused_ordering(898) 00:18:32.293 fused_ordering(899) 00:18:32.293 fused_ordering(900) 00:18:32.293 fused_ordering(901) 00:18:32.293 fused_ordering(902) 00:18:32.293 fused_ordering(903) 00:18:32.293 fused_ordering(904) 00:18:32.293 fused_ordering(905) 00:18:32.293 fused_ordering(906) 00:18:32.293 fused_ordering(907) 00:18:32.293 fused_ordering(908) 00:18:32.293 fused_ordering(909) 00:18:32.293 fused_ordering(910) 00:18:32.293 fused_ordering(911) 00:18:32.293 fused_ordering(912) 00:18:32.293 fused_ordering(913) 00:18:32.293 fused_ordering(914) 00:18:32.293 fused_ordering(915) 00:18:32.293 fused_ordering(916) 00:18:32.293 fused_ordering(917) 00:18:32.293 fused_ordering(918) 00:18:32.293 fused_ordering(919) 00:18:32.293 fused_ordering(920) 00:18:32.293 fused_ordering(921) 00:18:32.293 fused_ordering(922) 00:18:32.293 fused_ordering(923) 00:18:32.293 fused_ordering(924) 00:18:32.293 fused_ordering(925) 00:18:32.293 fused_ordering(926) 00:18:32.293 fused_ordering(927) 00:18:32.293 fused_ordering(928) 00:18:32.293 fused_ordering(929) 00:18:32.293 fused_ordering(930) 00:18:32.293 fused_ordering(931) 00:18:32.293 fused_ordering(932) 00:18:32.293 fused_ordering(933) 00:18:32.293 fused_ordering(934) 00:18:32.293 fused_ordering(935) 00:18:32.293 fused_ordering(936) 00:18:32.293 fused_ordering(937) 00:18:32.293 fused_ordering(938) 00:18:32.293 fused_ordering(939) 00:18:32.293 fused_ordering(940) 00:18:32.293 fused_ordering(941) 00:18:32.293 fused_ordering(942) 00:18:32.293 fused_ordering(943) 00:18:32.293 fused_ordering(944) 00:18:32.293 fused_ordering(945) 00:18:32.293 fused_ordering(946) 00:18:32.293 fused_ordering(947) 00:18:32.293 fused_ordering(948) 00:18:32.293 fused_ordering(949) 00:18:32.293 fused_ordering(950) 00:18:32.293 fused_ordering(951) 00:18:32.293 fused_ordering(952) 00:18:32.293 fused_ordering(953) 00:18:32.293 fused_ordering(954) 00:18:32.293 fused_ordering(955) 00:18:32.293 fused_ordering(956) 00:18:32.293 fused_ordering(957) 00:18:32.293 fused_ordering(958) 
00:18:32.293 fused_ordering(959) 00:18:32.293 fused_ordering(960) 00:18:32.293 fused_ordering(961) 00:18:32.293 fused_ordering(962) 00:18:32.293 fused_ordering(963) 00:18:32.293 fused_ordering(964) 00:18:32.293 fused_ordering(965) 00:18:32.293 fused_ordering(966) 00:18:32.293 fused_ordering(967) 00:18:32.293 fused_ordering(968) 00:18:32.293 fused_ordering(969) 00:18:32.293 fused_ordering(970) 00:18:32.293 fused_ordering(971) 00:18:32.293 fused_ordering(972) 00:18:32.293 fused_ordering(973) 00:18:32.293 fused_ordering(974) 00:18:32.293 fused_ordering(975) 00:18:32.293 fused_ordering(976) 00:18:32.293 fused_ordering(977) 00:18:32.293 fused_ordering(978) 00:18:32.293 fused_ordering(979) 00:18:32.293 fused_ordering(980) 00:18:32.293 fused_ordering(981) 00:18:32.293 fused_ordering(982) 00:18:32.293 fused_ordering(983) 00:18:32.293 fused_ordering(984) 00:18:32.293 fused_ordering(985) 00:18:32.293 fused_ordering(986) 00:18:32.293 fused_ordering(987) 00:18:32.293 fused_ordering(988) 00:18:32.293 fused_ordering(989) 00:18:32.293 fused_ordering(990) 00:18:32.293 fused_ordering(991) 00:18:32.293 fused_ordering(992) 00:18:32.293 fused_ordering(993) 00:18:32.293 fused_ordering(994) 00:18:32.293 fused_ordering(995) 00:18:32.293 fused_ordering(996) 00:18:32.293 fused_ordering(997) 00:18:32.293 fused_ordering(998) 00:18:32.293 fused_ordering(999) 00:18:32.293 fused_ordering(1000) 00:18:32.293 fused_ordering(1001) 00:18:32.293 fused_ordering(1002) 00:18:32.293 fused_ordering(1003) 00:18:32.293 fused_ordering(1004) 00:18:32.293 fused_ordering(1005) 00:18:32.293 fused_ordering(1006) 00:18:32.293 fused_ordering(1007) 00:18:32.293 fused_ordering(1008) 00:18:32.293 fused_ordering(1009) 00:18:32.293 fused_ordering(1010) 00:18:32.293 fused_ordering(1011) 00:18:32.293 fused_ordering(1012) 00:18:32.293 fused_ordering(1013) 00:18:32.293 fused_ordering(1014) 00:18:32.293 fused_ordering(1015) 00:18:32.293 fused_ordering(1016) 00:18:32.293 fused_ordering(1017) 00:18:32.293 fused_ordering(1018) 00:18:32.293 fused_ordering(1019) 00:18:32.293 fused_ordering(1020) 00:18:32.293 fused_ordering(1021) 00:18:32.293 fused_ordering(1022) 00:18:32.293 fused_ordering(1023) 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.293 rmmod nvme_tcp 00:18:32.293 rmmod nvme_fabrics 00:18:32.293 rmmod nvme_keyring 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:18:32.293 09:36:07 
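Teardown runs with set +e because the kernel may still hold module references while connections drain: nvmfcleanup loops up to 20 times unloading nvme-tcp and nvme-fabrics (the rmmod lines above are the successful removals), and killprocess below then verifies pid 2755043 still belongs to the target before signalling it. The unload loop is approximately this (the sleep between attempts is an assumption):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e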
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2755043 ']' 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2755043 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2755043 ']' 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2755043 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2755043 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:32.293 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2755043' 00:18:32.293 killing process with pid 2755043 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2755043 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2755043 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.294 09:36:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.838 00:18:34.838 real 0m13.489s 00:18:34.838 user 0m7.171s 00:18:34.838 sys 0m7.191s 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:34.838 ************************************ 00:18:34.838 END TEST nvmf_fused_ordering 00:18:34.838 
************************************ 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.838 ************************************ 00:18:34.838 START TEST nvmf_ns_masking 00:18:34.838 ************************************ 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:34.838 * Looking for test storage... 00:18:34.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.838 09:36:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.838 --rc genhtml_branch_coverage=1 00:18:34.838 --rc genhtml_function_coverage=1 00:18:34.838 --rc genhtml_legend=1 00:18:34.838 --rc geninfo_all_blocks=1 00:18:34.838 --rc geninfo_unexecuted_blocks=1 00:18:34.838 00:18:34.838 ' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.838 --rc genhtml_branch_coverage=1 00:18:34.838 --rc genhtml_function_coverage=1 00:18:34.838 --rc genhtml_legend=1 00:18:34.838 --rc geninfo_all_blocks=1 00:18:34.838 --rc geninfo_unexecuted_blocks=1 00:18:34.838 00:18:34.838 ' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.838 --rc genhtml_branch_coverage=1 00:18:34.838 --rc genhtml_function_coverage=1 00:18:34.838 --rc genhtml_legend=1 00:18:34.838 --rc geninfo_all_blocks=1 00:18:34.838 --rc geninfo_unexecuted_blocks=1 00:18:34.838 00:18:34.838 ' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.838 --rc genhtml_branch_coverage=1 00:18:34.838 --rc genhtml_function_coverage=1 00:18:34.838 --rc genhtml_legend=1 00:18:34.838 --rc geninfo_all_blocks=1 00:18:34.838 --rc geninfo_unexecuted_blocks=1 00:18:34.838 00:18:34.838 ' 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.838 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.839 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=30831dfd-8651-445a-b486-f783c01dc3fe 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=da79ffce-2ef8-42f6-b5ab-466b7d4b5824 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b2516a84-0de5-4227-a8e5-c56befc23e26 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:34.839 09:36:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:42.981 09:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.981 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:42.982 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:42.982 09:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:42.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:42.982 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
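The discovery pass above has matched both e810 ports to their net devices (cvl_0_0 and cvl_0_1); the nvmf_tcp_init steps that follow move one port into a network namespace so a target and an initiator can exchange real NVMe/TCP traffic on a single machine. A minimal sketch of that topology, with a hypothetical veth pair standing in for the dual-port NIC used in this run:

  # Loopback NVMe/TCP topology, as nvmf_tcp_init builds it below.
  # veth0/veth1 are assumed stand-ins for the physical cvl_0_0/cvl_0_1 ports.
  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link add veth1 type veth peer name veth0
  ip link set veth0 netns "$NS"                          # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev veth1                      # initiator IP
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth0  # target IP
  ip link set veth1 up
  ip netns exec "$NS" ip link set veth0 up
  ip netns exec "$NS" ip link set lo up
  # Admit NVMe/TCP traffic on the well-known port, then verify both directions:
  iptables -I INPUT 1 -i veth1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1

Run as root with iproute2 and iptables available. The namespace is what lets the same box launch nvmf_tgt under 'ip netns exec' while nvme-cli connects from the default namespace, as the log shows next.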
00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:42.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.982 09:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:42.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:18:42.982 00:18:42.982 --- 10.0.0.2 ping statistics --- 00:18:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.982 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:18:42.982 00:18:42.982 --- 10.0.0.1 ping statistics --- 00:18:42.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.982 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.982 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2760327 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2760327 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2760327 ']' 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.983 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:42.983 [2024-12-09 09:36:17.649071] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:42.983 [2024-12-09 09:36:17.649138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.983 [2024-12-09 09:36:17.746961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.983 [2024-12-09 09:36:17.773041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.983 [2024-12-09 09:36:17.773091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.983 [2024-12-09 09:36:17.773100] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.983 [2024-12-09 09:36:17.773108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.983 [2024-12-09 09:36:17.773114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
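Once the reactor below is up and nvmf_tgt is listening on /var/tmp/spdk.sock, the ns_masking test drives everything over JSON-RPC. A condensed sketch of the sequence it walks through, assuming rpc points at the repo's scripts/rpc.py and HOSTID holds the uuidgen value generated earlier:

  rpc=./scripts/rpc.py  # this run: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; flags as passed by nvmftestinit
  $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Unmask namespace 1 for one host NQN, then connect as that host:
  $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I "$HOSTID" -a 10.0.0.2 -s 4420 -i 4
  # The visibility probe ns_is_visible uses: a masked namespace reports an
  # all-zero NGUID, a visible one reports the real NGUID.
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
  # Mask it again; a subsequent id-ns returns to 00000000...0000:
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The log below also exercises the failure case: calling nvmf_ns_remove_host against namespace ID 2 is rejected with JSON-RPC error -32602 ("Invalid parameters") and a target-side "Unable to add/remove ... to namespace ID 2" message, and the test's NOT wrapper counts that expected failure as a pass.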
00:18:42.983 [2024-12-09 09:36:17.773820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:43.245 [2024-12-09 09:36:18.668591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:43.245 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:43.507 Malloc1 00:18:43.507 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:43.769 Malloc2 00:18:43.769 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:44.031 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:44.031 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:44.291 [2024-12-09 09:36:19.624281] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.292 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:44.292 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2516a84-0de5-4227-a8e5-c56befc23e26 -a 10.0.0.2 -s 4420 -i 4 00:18:44.554 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:44.554 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:44.554 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.554 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:44.554 
09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:46.484 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:46.744 [ 0]:0x1 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:46.744 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:46.744 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3463fff9311348ae943735f03f4e36b6 00:18:46.744 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3463fff9311348ae943735f03f4e36b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:46.745 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:47.004 [ 0]:0x1 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3463fff9311348ae943735f03f4e36b6 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3463fff9311348ae943735f03f4e36b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.004 09:36:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:47.004 [ 1]:0x2 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:47.004 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:47.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:47.263 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:47.263 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:47.522 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:47.522 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2516a84-0de5-4227-a8e5-c56befc23e26 -a 10.0.0.2 -s 4420 -i 4 00:18:47.522 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:47.781 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.781 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.781 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:47.781 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:47.781 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.691 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:49.691 [ 0]:0x2 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:49.691 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=594fe531696744af9eb01d0a697a8840 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:49.952 [ 0]:0x1 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:49.952 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3463fff9311348ae943735f03f4e36b6 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3463fff9311348ae943735f03f4e36b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:50.213 [ 1]:0x2 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.213 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.474 09:36:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:50.474 [ 0]:0x2 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:50.474 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:50.474 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:50.734 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:50.734 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b2516a84-0de5-4227-a8e5-c56befc23e26 -a 10.0.0.2 -s 4420 -i 4 00:18:50.994 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:50.995 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:50.995 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.995 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:50.995 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:50.995 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:52.905 [ 0]:0x1 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:52.905 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=3463fff9311348ae943735f03f4e36b6 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 3463fff9311348ae943735f03f4e36b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.165 [ 1]:0x2 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:53.165 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.425 [ 0]:0x2 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.425 09:36:28 
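A condensed sketch of the visibility probe this trace keeps re-running (target/ns_masking.sh@43-45): list the active namespace IDs, then pull the NGUID via Identify Namespace; a namespace the target is masking still answers the Identify but reports an all-zero NGUID. Device path and helper name are taken from the trace; this is a simplification, not the harness's exact code.

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"                              # active NSIDs only
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != 00000000000000000000000000000000 ]]                 # masked namespaces identify with a zero NGUID
    }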
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:53.425 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:53.687 [2024-12-09 09:36:28.897773] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:53.687 request: 00:18:53.687 { 00:18:53.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:53.687 "nsid": 2, 00:18:53.687 "host": "nqn.2016-06.io.spdk:host1", 00:18:53.687 "method": "nvmf_ns_remove_host", 00:18:53.687 "req_id": 1 00:18:53.687 } 00:18:53.687 Got JSON-RPC error response 00:18:53.687 response: 00:18:53.687 { 00:18:53.687 "code": -32602, 00:18:53.687 "message": "Invalid parameters" 00:18:53.687 } 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:53.687 09:36:28 
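The NOT wrapper (autotest_common.sh@652-679) inverts a command's exit status: the nvmf_ns_remove_host call here is supposed to fail, because namespace 2 was never switched to per-host masking, so there is no allowed-hosts list to remove nqn.2016-06.io.spdk:host1 from, and the target answers with the JSON-RPC -32602 recorded just above. A rough reading of the helper, simplified from the es bookkeeping visible in the trace:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return 1    # killed by a signal: treat as a genuine failure
        (( es != 0 ))                 # NOT succeeds only if the command returned non-zero
    }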
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:53.687 [ 0]:0x2 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:53.687 09:36:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=594fe531696744af9eb01d0a697a8840 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 594fe531696744af9eb01d0a697a8840 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:53.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2762777 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2762777 /var/tmp/host.sock 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2762777 ']' 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:53.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:53.687 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:53.947 [2024-12-09 09:36:29.144468] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:18:53.947 [2024-12-09 09:36:29.144520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762777 ] 00:18:53.948 [2024-12-09 09:36:29.230882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.948 [2024-12-09 09:36:29.248900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.518 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.518 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:54.518 09:36:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:54.777 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 30831dfd-8651-445a-b486-f783c01dc3fe 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30831DFD8651445AB486F783C01DC3FE -i 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid da79ffce-2ef8-42f6-b5ab-466b7d4b5824 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:55.038 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DA79FFCE2EF842F6B5AB466B7D4B5824 -i 00:18:55.298 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
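The uuid2nguid step above (nvmf/common.sh@787) exists because nvmf_subsystem_add_ns -g wants the NGUID as 32 bare hex digits: the trace shows the dashes being stripped with tr -d -, and the value is passed upper-cased. A sketch of that conversion; the upper-casing via ${1^^} is an assumption about the part of the helper the trace does not show:

    uuid2nguid() {
        local u=${1^^}       # 30831dfd-8651-445a-b486-f783c01dc3fe -> 30831DFD...
        echo "${u//-/}"      # drop the dashes, as the traced tr -d - does
    }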
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:55.558 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:55.558 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:55.558 09:36:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:55.817 nvme0n1 00:18:56.078 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:56.078 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:56.338 nvme1n2 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:56.338 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:56.597 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 30831dfd-8651-445a-b486-f783c01dc3fe == \3\0\8\3\1\d\f\d\-\8\6\5\1\-\4\4\5\a\-\b\4\8\6\-\f\7\8\3\c\0\1\d\c\3\f\e ]] 00:18:56.597 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:56.597 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:56.597 09:36:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:56.857 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
da79ffce-2ef8-42f6-b5ab-466b7d4b5824 == \d\a\7\9\f\f\c\e\-\2\e\f\8\-\4\2\f\6\-\b\5\a\b\-\4\6\6\b\7\d\4\b\5\8\2\4 ]] 00:18:56.857 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:56.857 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 30831dfd-8651-445a-b486-f783c01dc3fe 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30831DFD8651445AB486F783C01DC3FE 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30831DFD8651445AB486F783C01DC3FE 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:57.119 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 30831DFD8651445AB486F783C01DC3FE 00:18:57.380 [2024-12-09 09:36:32.575461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:57.380 [2024-12-09 09:36:32.575494] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:57.380 [2024-12-09 09:36:32.575501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.380 request: 00:18:57.380 { 00:18:57.380 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.380 "namespace": { 00:18:57.380 "bdev_name": 
"invalid", 00:18:57.380 "nsid": 1, 00:18:57.380 "nguid": "30831DFD8651445AB486F783C01DC3FE", 00:18:57.380 "no_auto_visible": false, 00:18:57.380 "hide_metadata": false 00:18:57.380 }, 00:18:57.380 "method": "nvmf_subsystem_add_ns", 00:18:57.380 "req_id": 1 00:18:57.380 } 00:18:57.380 Got JSON-RPC error response 00:18:57.380 response: 00:18:57.380 { 00:18:57.380 "code": -32602, 00:18:57.380 "message": "Invalid parameters" 00:18:57.380 } 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 30831dfd-8651-445a-b486-f783c01dc3fe 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 30831DFD8651445AB486F783C01DC3FE -i 00:18:57.380 09:36:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2762777 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2762777 ']' 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2762777 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.923 09:36:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2762777 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2762777' 00:18:59.923 killing process with pid 2762777 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2762777 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2762777 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:59.923 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:00.183 rmmod nvme_tcp 00:19:00.183 rmmod nvme_fabrics 00:19:00.183 rmmod nvme_keyring 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2760327 ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2760327 ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2760327' 00:19:00.183 killing process with pid 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2760327 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
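Teardown runs through nvmftestfini: the kernel initiator modules are unloaded (the rmmod lines above) and the firewall edits are rolled back. The rollback relies on every rule added during init carrying an SPDK_NVMF comment tag, so the save/filter/restore round trip being traced at this point reduces to:

    iptables-save | grep -v SPDK_NVMF | iptables-restore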
00:19:00.183 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:19:00.444 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:00.444 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:00.444 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.444 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:00.444 09:36:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:02.357 00:19:02.357 real 0m27.834s 00:19:02.357 user 0m31.377s 00:19:02.357 sys 0m8.145s 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:02.357 ************************************ 00:19:02.357 END TEST nvmf_ns_masking 00:19:02.357 ************************************ 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:02.357 09:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:02.358 09:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.358 09:36:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:02.358 ************************************ 00:19:02.358 START TEST nvmf_nvme_cli 00:19:02.358 ************************************ 00:19:02.358 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:02.619 * Looking for test storage... 
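Each suite is driven by run_test, which prints the START/END banners and wall-clock summary seen above (ns_masking finished in 27.8 s real time) and propagates the script's exit status to the build. Roughly, omitting the harness's xtrace plumbing:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@" || return 1     # a failing suite fails the whole autotest stage
        echo "END TEST $name"
    }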
00:19:02.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:02.619 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:02.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.620 --rc genhtml_branch_coverage=1 00:19:02.620 --rc genhtml_function_coverage=1 00:19:02.620 --rc genhtml_legend=1 00:19:02.620 --rc geninfo_all_blocks=1 00:19:02.620 --rc geninfo_unexecuted_blocks=1 00:19:02.620 00:19:02.620 ' 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:02.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.620 --rc genhtml_branch_coverage=1 00:19:02.620 --rc genhtml_function_coverage=1 00:19:02.620 --rc genhtml_legend=1 00:19:02.620 --rc geninfo_all_blocks=1 00:19:02.620 --rc geninfo_unexecuted_blocks=1 00:19:02.620 00:19:02.620 ' 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:02.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.620 --rc genhtml_branch_coverage=1 00:19:02.620 --rc genhtml_function_coverage=1 00:19:02.620 --rc genhtml_legend=1 00:19:02.620 --rc geninfo_all_blocks=1 00:19:02.620 --rc geninfo_unexecuted_blocks=1 00:19:02.620 00:19:02.620 ' 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:02.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.620 --rc genhtml_branch_coverage=1 00:19:02.620 --rc genhtml_function_coverage=1 00:19:02.620 --rc genhtml_legend=1 00:19:02.620 --rc geninfo_all_blocks=1 00:19:02.620 --rc geninfo_unexecuted_blocks=1 00:19:02.620 00:19:02.620 ' 00:19:02.620 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
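The version dance above is scripts/common.sh deciding whether the installed lcov still takes the old --rc lcov_branch_coverage flags: it grabs the last field of lcov --version and compares it digit group by digit group against 2. An approximation using sort -V in place of the harness's own comparison loop:

    ver=$(lcov --version | awk '{print $NF}')
    if printf '%s\n%s\n' "$ver" 2 | sort -C -V; then    # true when ver <= 2
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi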
00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.620 09:36:38 
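The "line 33: [: : integer expression expected" message above is a recorded harness quirk, not a test failure: nvmf/common.sh@33 evaluates '[' '' -eq 1 ']' with an empty variable, [ prints the complaint to stderr, the test is simply false, and the script carries on. An illustrative defensive form (variable name hypothetical):

    [ "${maybe_unset:-0}" -eq 1 ] && echo "feature enabled"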
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:19:02.620 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:10.766 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:10.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:10.766 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:10.766 
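What the Found lines above reflect: gather_supported_nvmf_pci_devs buckets PCI vendor:device pairs into the e810/x722/mlx lists, and 0x8086:0x159b is an Intel E810 port driven by ice, so both ports of the dual-port NIC qualify. The netdev name is then read straight out of sysfs, roughly:

    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0, the renamed test interface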
09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:10.766 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:10.766 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:10.766 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:10.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:10.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:19:10.766 00:19:10.766 --- 10.0.0.2 ping statistics --- 00:19:10.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.767 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:10.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:10.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:19:10.767 00:19:10.767 --- 10.0.0.1 ping statistics --- 00:19:10.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.767 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2768221 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2768221 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2768221 ']' 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 [2024-12-09 09:36:45.405036] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
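The stretch of trace above is the harness building its loopback NVMe/TCP topology: the target-side port (cvl_0_0) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, the listener port is opened in iptables, and a ping in each direction confirms reachability. Distilled into standalone commands, using the interface names and addresses from this run, the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                                   # target runs inside its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns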
00:19:10.767 [2024-12-09 09:36:45.405088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.767 [2024-12-09 09:36:45.493283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.767 [2024-12-09 09:36:45.523170] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.767 [2024-12-09 09:36:45.523221] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.767 [2024-12-09 09:36:45.523230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.767 [2024-12-09 09:36:45.523237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.767 [2024-12-09 09:36:45.523245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.767 [2024-12-09 09:36:45.525408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.767 [2024-12-09 09:36:45.525540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.767 [2024-12-09 09:36:45.525710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.767 [2024-12-09 09:36:45.525711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 [2024-12-09 09:36:45.674384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 Malloc0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
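With nvmf_tgt now running inside the namespace, the test provisions it over JSON-RPC. The rpc_cmd calls in the trace are the harness wrapper around SPDK's scripts/rpc.py; run by hand, the same provisioning would look roughly like this sketch (sizes, names and flags taken from the trace):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # TCP transport with the harness's -o/-u options
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1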
00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 Malloc1 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 [2024-12-09 09:36:45.770562] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:19:10.767 00:19:10.767 Discovery Log Number of Records 2, Generation counter 2 00:19:10.767 =====Discovery Log Entry 0====== 00:19:10.767 trtype: tcp 00:19:10.767 adrfam: ipv4 00:19:10.767 subtype: current discovery subsystem 00:19:10.767 treq: not required 00:19:10.767 portid: 0 00:19:10.767 trsvcid: 4420 00:19:10.767 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:19:10.767 traddr: 10.0.0.2 00:19:10.767 eflags: explicit discovery connections, duplicate discovery information 00:19:10.767 sectype: none 00:19:10.767 =====Discovery Log Entry 1====== 00:19:10.767 trtype: tcp 00:19:10.767 adrfam: ipv4 00:19:10.767 subtype: nvme subsystem 00:19:10.767 treq: not required 00:19:10.767 portid: 0 00:19:10.767 trsvcid: 4420 00:19:10.767 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:10.767 traddr: 10.0.0.2 00:19:10.767 eflags: none 00:19:10.767 sectype: none 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:10.767 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:10.768 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:12.152 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:14.063 09:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.063 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:14.322 /dev/nvme0n2 ]] 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.322 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:14.582 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:14.583 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:14.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:14.843 09:36:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:14.843 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:14.844 rmmod nvme_tcp 00:19:14.844 rmmod nvme_fabrics 00:19:14.844 rmmod nvme_keyring 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2768221 ']' 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2768221 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2768221 ']' 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2768221 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2768221 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2768221' 00:19:14.844 killing process with pid 2768221 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2768221 00:19:14.844 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2768221 00:19:15.104 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.104 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.104 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.104 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.105 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.033 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:17.033 00:19:17.033 real 0m14.683s 00:19:17.033 user 0m21.509s 00:19:17.033 sys 0m6.164s 00:19:17.033 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.033 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:17.033 ************************************ 00:19:17.033 END TEST nvmf_nvme_cli 00:19:17.033 ************************************ 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:17.293 ************************************ 00:19:17.293 START TEST nvmf_vfio_user 00:19:17.293 ************************************ 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:19:17.293 * Looking for test storage... 00:19:17.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.293 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:17.553 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.554 --rc genhtml_branch_coverage=1 00:19:17.554 --rc genhtml_function_coverage=1 00:19:17.554 --rc genhtml_legend=1 00:19:17.554 --rc geninfo_all_blocks=1 00:19:17.554 --rc geninfo_unexecuted_blocks=1 00:19:17.554 00:19:17.554 ' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.554 --rc genhtml_branch_coverage=1 00:19:17.554 --rc genhtml_function_coverage=1 00:19:17.554 --rc genhtml_legend=1 00:19:17.554 --rc geninfo_all_blocks=1 00:19:17.554 --rc geninfo_unexecuted_blocks=1 00:19:17.554 00:19:17.554 ' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.554 --rc genhtml_branch_coverage=1 00:19:17.554 --rc genhtml_function_coverage=1 00:19:17.554 --rc genhtml_legend=1 00:19:17.554 --rc geninfo_all_blocks=1 00:19:17.554 --rc geninfo_unexecuted_blocks=1 00:19:17.554 00:19:17.554 ' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.554 --rc genhtml_branch_coverage=1 00:19:17.554 --rc genhtml_function_coverage=1 00:19:17.554 --rc genhtml_legend=1 00:19:17.554 --rc geninfo_all_blocks=1 00:19:17.554 --rc geninfo_unexecuted_blocks=1 00:19:17.554 00:19:17.554 ' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:17.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
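The stray '[: : integer expression expected' reported by nvmf/common.sh line 33 above is a real but harmless artifact: the trace shows the guard evaluating '[' '' -eq 1 ']', and bash's [ requires integers on both sides of -eq. A minimal reproduction, with $flag as a stand-in since the actual variable name is not visible in the trace:

    flag=''
    [ "$flag" -eq 1 ]        # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulting the empty value to 0 would silence the warning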
00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2769717 00:19:17.554 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2769717' 00:19:17.554 Process pid: 2769717 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2769717 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2769717 ']' 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.555 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 [2024-12-09 09:36:52.868792] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:19:17.555 [2024-12-09 09:36:52.868875] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.555 [2024-12-09 09:36:52.955995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:17.555 [2024-12-09 09:36:52.972329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.555 [2024-12-09 09:36:52.972367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:17.555 [2024-12-09 09:36:52.972373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.555 [2024-12-09 09:36:52.972378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:17.555 [2024-12-09 09:36:52.972382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.555 [2024-12-09 09:36:52.973716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:17.555 [2024-12-09 09:36:52.974101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.555 [2024-12-09 09:36:52.974263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.555 [2024-12-09 09:36:52.974264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.494 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.494 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:18.494 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:19.436 09:36:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:19.696 Malloc1 00:19:19.696 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:19.956 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:20.216 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:20.216 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.216 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:20.216 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:20.475 Malloc2 00:19:20.475 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
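The VFIOUSER variant is provisioned the same way as the TCP target, except that each listener address is a filesystem path for the vfio-user socket rather than an IP and port. Per device, the trace boils down to the following sequence (NQNs, serials and paths as in this run):

    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0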
00:19:20.736 09:36:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:20.736 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:20.996 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:20.996 [2024-12-09 09:36:56.374022] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:19:20.996 [2024-12-09 09:36:56.374072] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2770408 ] 00:19:20.996 [2024-12-09 09:36:56.415657] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:20.996 [2024-12-09 09:36:56.423926] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:20.996 [2024-12-09 09:36:56.423941] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f68927ff000 00:19:20.996 [2024-12-09 09:36:56.424921] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.425927] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.426933] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.427942] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.428944] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.429951] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.430954] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.431969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:20.996 [2024-12-09 09:36:56.432974] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:20.996 [2024-12-09 09:36:56.432981] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6891509000 00:19:20.997 [2024-12-09 09:36:56.433894] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:20.997 [2024-12-09 09:36:56.447917] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:20.997 [2024-12-09 09:36:56.447942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:21.260 [2024-12-09 09:36:56.450073] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:21.260 [2024-12-09 09:36:56.450105] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:21.260 [2024-12-09 09:36:56.450161] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:21.260 [2024-12-09 09:36:56.450173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:21.260 [2024-12-09 09:36:56.450176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:21.260 [2024-12-09 09:36:56.451071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:21.260 [2024-12-09 09:36:56.451080] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:21.260 [2024-12-09 09:36:56.451085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:21.260 [2024-12-09 09:36:56.452075] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:21.260 [2024-12-09 09:36:56.452082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:21.260 [2024-12-09 09:36:56.452090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.453076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:21.260 [2024-12-09 09:36:56.453082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.454083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
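The register offsets in the vfio-user debug lines follow the standard NVMe controller register map, so the bring-up can be read straight from the trace: 0x00 is CAP (0x201e0100ff), 0x08 is VS (0x10300, i.e. NVMe 1.3), 0x14 is CC, 0x1c is CSTS, 0x24 is AQA, and 0x28/0x30 are the ASQ/ACQ admin queue base addresses. The sequence 'CC.EN = 0 && CSTS.RDY = 0', write the admin queue registers, 'Setting CC.EN = 1', then poll until 'CSTS.RDY = 1' is the controller-enable handshake from the NVMe specification, carried here over the vfio-user socket instead of a PCIe BAR.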
00:19:21.260 [2024-12-09 09:36:56.454088] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:21.260 [2024-12-09 09:36:56.454091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.454096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.454202] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:21.260 [2024-12-09 09:36:56.454205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.454209] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:21.260 [2024-12-09 09:36:56.455091] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:21.260 [2024-12-09 09:36:56.456095] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:21.260 [2024-12-09 09:36:56.457101] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:21.260 [2024-12-09 09:36:56.458102] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:21.260 [2024-12-09 09:36:56.458155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:21.260 [2024-12-09 09:36:56.459116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:21.260 [2024-12-09 09:36:56.459122] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:21.260 [2024-12-09 09:36:56.459125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:21.260 [2024-12-09 09:36:56.459140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:21.260 [2024-12-09 09:36:56.459145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:21.260 [2024-12-09 09:36:56.459158] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.260 [2024-12-09 09:36:56.459161] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.260 [2024-12-09 09:36:56.459164] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.260 [2024-12-09 09:36:56.459174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:19:21.260 [2024-12-09 09:36:56.459202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:21.260 [2024-12-09 09:36:56.459210] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:21.260 [2024-12-09 09:36:56.459214] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:21.260 [2024-12-09 09:36:56.459217] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:21.260 [2024-12-09 09:36:56.459220] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:21.260 [2024-12-09 09:36:56.459223] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:21.260 [2024-12-09 09:36:56.459227] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:21.260 [2024-12-09 09:36:56.459230] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:21.260 [2024-12-09 09:36:56.459235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:21.260 [2024-12-09 09:36:56.459243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.261 [2024-12-09 09:36:56.459267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.261 [2024-12-09 09:36:56.459273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.261 [2024-12-09 09:36:56.459279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.261 [2024-12-09 09:36:56.459283] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459307] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:21.261 
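The admin commands in this stretch are identifiable from their cdw10 values: IDENTIFY cdw10:00000001 is Identify Controller (CNS 01h); SET FEATURES cdw10:0000000b configures Asynchronous Event notification (FID 0Bh); GET FEATURES cdw10:0000000f reads the Keep Alive timer (FID 0Fh, adjusted to 0 ms here); and SET FEATURES cdw10:00000007 negotiates queue counts (FID 07h; the completion's cdw0:7e007e reports 0x7e, a 0-based count, for both the I/O submission and completion queues). The IDENTIFY with cdw10:00000002 that follows is the Active Namespace ID list (CNS 02h), and cdw10:00000000 with nsid:1 is Identify Namespace (CNS 00h) for the malloc-backed namespace attached earlier.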
[2024-12-09 09:36:56.459311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459388] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459393] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:21.261 [2024-12-09 09:36:56.459396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:21.261 [2024-12-09 09:36:56.459398] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.261 [2024-12-09 09:36:56.459403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459423] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:21.261 [2024-12-09 09:36:56.459430] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459440] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.261 [2024-12-09 09:36:56.459443] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.261 [2024-12-09 09:36:56.459445] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.261 [2024-12-09 09:36:56.459450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459487] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:21.261 [2024-12-09 09:36:56.459490] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.261 [2024-12-09 09:36:56.459492] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.261 [2024-12-09 09:36:56.459496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459534] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459538] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:21.261 [2024-12-09 09:36:56.459541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:21.261 [2024-12-09 09:36:56.459544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:21.261 [2024-12-09 09:36:56.459558] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:21.261 [2024-12-09 09:36:56.459619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:21.261 [2024-12-09 09:36:56.459629] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:21.261 [2024-12-09 09:36:56.459632] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:21.261 [2024-12-09 09:36:56.459634] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:21.261 [2024-12-09 09:36:56.459640] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:21.261 [2024-12-09 09:36:56.459642] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:21.261 [2024-12-09 09:36:56.459647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:21.262 [2024-12-09 09:36:56.459652] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:21.262 [2024-12-09 09:36:56.459655] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:21.262 [2024-12-09 09:36:56.459657] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.262 [2024-12-09 09:36:56.459662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:21.262 [2024-12-09 09:36:56.459667] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:21.262 [2024-12-09 09:36:56.459669] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:21.262 [2024-12-09 09:36:56.459672] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.262 [2024-12-09 09:36:56.459676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:21.262 [2024-12-09 09:36:56.459682] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:21.262 [2024-12-09 09:36:56.459686] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:21.262 [2024-12-09 09:36:56.459688] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:21.262 [2024-12-09 09:36:56.459692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:21.262 [2024-12-09 09:36:56.459697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:21.262 [2024-12-09 09:36:56.459707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:19:21.262 [2024-12-09 09:36:56.459714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:21.262 [2024-12-09 09:36:56.459719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:21.262 ===================================================== 00:19:21.262 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:21.262 ===================================================== 00:19:21.262 Controller Capabilities/Features 00:19:21.262 ================================ 00:19:21.262 Vendor ID: 4e58 00:19:21.262 Subsystem Vendor ID: 4e58 00:19:21.262 Serial Number: SPDK1 00:19:21.262 Model Number: SPDK bdev Controller 00:19:21.262 Firmware Version: 25.01 00:19:21.262 Recommended Arb Burst: 6 00:19:21.262 IEEE OUI Identifier: 8d 6b 50 00:19:21.262 Multi-path I/O 00:19:21.262 May have multiple subsystem ports: Yes 00:19:21.262 May have multiple controllers: Yes 00:19:21.262 Associated with SR-IOV VF: No 00:19:21.262 Max Data Transfer Size: 131072 00:19:21.262 Max Number of Namespaces: 32 00:19:21.262 Max Number of I/O Queues: 127 00:19:21.262 NVMe Specification Version (VS): 1.3 00:19:21.262 NVMe Specification Version (Identify): 1.3 00:19:21.262 Maximum Queue Entries: 256 00:19:21.262 Contiguous Queues Required: Yes 00:19:21.262 Arbitration Mechanisms Supported 00:19:21.262 Weighted Round Robin: Not Supported 00:19:21.262 Vendor Specific: Not Supported 00:19:21.262 Reset Timeout: 15000 ms 00:19:21.262 Doorbell Stride: 4 bytes 00:19:21.262 NVM Subsystem Reset: Not Supported 00:19:21.262 Command Sets Supported 00:19:21.262 NVM Command Set: Supported 00:19:21.262 Boot Partition: Not Supported 00:19:21.262 Memory Page Size Minimum: 4096 bytes 00:19:21.262 Memory Page Size Maximum: 4096 bytes 00:19:21.262 Persistent Memory Region: Not Supported 00:19:21.262 Optional Asynchronous Events Supported 00:19:21.262 Namespace Attribute Notices: Supported 00:19:21.262 Firmware Activation Notices: Not Supported 00:19:21.262 ANA Change Notices: Not Supported 00:19:21.262 PLE Aggregate Log Change Notices: Not Supported 00:19:21.262 LBA Status Info Alert Notices: Not Supported 00:19:21.262 EGE Aggregate Log Change Notices: Not Supported 00:19:21.262 Normal NVM Subsystem Shutdown event: Not Supported 00:19:21.262 Zone Descriptor Change Notices: Not Supported 00:19:21.262 Discovery Log Change Notices: Not Supported 00:19:21.262 Controller Attributes 00:19:21.262 128-bit Host Identifier: Supported 00:19:21.262 Non-Operational Permissive Mode: Not Supported 00:19:21.262 NVM Sets: Not Supported 00:19:21.262 Read Recovery Levels: Not Supported 00:19:21.262 Endurance Groups: Not Supported 00:19:21.262 Predictable Latency Mode: Not Supported 00:19:21.262 Traffic Based Keep ALive: Not Supported 00:19:21.262 Namespace Granularity: Not Supported 00:19:21.262 SQ Associations: Not Supported 00:19:21.262 UUID List: Not Supported 00:19:21.262 Multi-Domain Subsystem: Not Supported 00:19:21.262 Fixed Capacity Management: Not Supported 00:19:21.262 Variable Capacity Management: Not Supported 00:19:21.262 Delete Endurance Group: Not Supported 00:19:21.262 Delete NVM Set: Not Supported 00:19:21.262 Extended LBA Formats Supported: Not Supported 00:19:21.262 Flexible Data Placement Supported: Not Supported 00:19:21.262 00:19:21.262 Controller Memory Buffer Support 00:19:21.262 ================================ 00:19:21.262 
Supported: No 00:19:21.262 00:19:21.262 Persistent Memory Region Support 00:19:21.262 ================================ 00:19:21.262 Supported: No 00:19:21.262 00:19:21.262 Admin Command Set Attributes 00:19:21.262 ============================ 00:19:21.262 Security Send/Receive: Not Supported 00:19:21.262 Format NVM: Not Supported 00:19:21.262 Firmware Activate/Download: Not Supported 00:19:21.262 Namespace Management: Not Supported 00:19:21.262 Device Self-Test: Not Supported 00:19:21.262 Directives: Not Supported 00:19:21.262 NVMe-MI: Not Supported 00:19:21.262 Virtualization Management: Not Supported 00:19:21.262 Doorbell Buffer Config: Not Supported 00:19:21.262 Get LBA Status Capability: Not Supported 00:19:21.262 Command & Feature Lockdown Capability: Not Supported 00:19:21.262 Abort Command Limit: 4 00:19:21.262 Async Event Request Limit: 4 00:19:21.262 Number of Firmware Slots: N/A 00:19:21.262 Firmware Slot 1 Read-Only: N/A 00:19:21.262 Firmware Activation Without Reset: N/A 00:19:21.262 Multiple Update Detection Support: N/A 00:19:21.262 Firmware Update Granularity: No Information Provided 00:19:21.262 Per-Namespace SMART Log: No 00:19:21.262 Asymmetric Namespace Access Log Page: Not Supported 00:19:21.262 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:21.262 Command Effects Log Page: Supported 00:19:21.262 Get Log Page Extended Data: Supported 00:19:21.262 Telemetry Log Pages: Not Supported 00:19:21.262 Persistent Event Log Pages: Not Supported 00:19:21.262 Supported Log Pages Log Page: May Support 00:19:21.262 Commands Supported & Effects Log Page: Not Supported 00:19:21.262 Feature Identifiers & Effects Log Page:May Support 00:19:21.262 NVMe-MI Commands & Effects Log Page: May Support 00:19:21.262 Data Area 4 for Telemetry Log: Not Supported 00:19:21.262 Error Log Page Entries Supported: 128 00:19:21.262 Keep Alive: Supported 00:19:21.262 Keep Alive Granularity: 10000 ms 00:19:21.262 00:19:21.262 NVM Command Set Attributes 00:19:21.262 ========================== 00:19:21.262 Submission Queue Entry Size 00:19:21.263 Max: 64 00:19:21.263 Min: 64 00:19:21.263 Completion Queue Entry Size 00:19:21.263 Max: 16 00:19:21.263 Min: 16 00:19:21.263 Number of Namespaces: 32 00:19:21.263 Compare Command: Supported 00:19:21.263 Write Uncorrectable Command: Not Supported 00:19:21.263 Dataset Management Command: Supported 00:19:21.263 Write Zeroes Command: Supported 00:19:21.263 Set Features Save Field: Not Supported 00:19:21.263 Reservations: Not Supported 00:19:21.263 Timestamp: Not Supported 00:19:21.263 Copy: Supported 00:19:21.263 Volatile Write Cache: Present 00:19:21.263 Atomic Write Unit (Normal): 1 00:19:21.263 Atomic Write Unit (PFail): 1 00:19:21.263 Atomic Compare & Write Unit: 1 00:19:21.263 Fused Compare & Write: Supported 00:19:21.263 Scatter-Gather List 00:19:21.263 SGL Command Set: Supported (Dword aligned) 00:19:21.263 SGL Keyed: Not Supported 00:19:21.263 SGL Bit Bucket Descriptor: Not Supported 00:19:21.263 SGL Metadata Pointer: Not Supported 00:19:21.263 Oversized SGL: Not Supported 00:19:21.263 SGL Metadata Address: Not Supported 00:19:21.263 SGL Offset: Not Supported 00:19:21.263 Transport SGL Data Block: Not Supported 00:19:21.263 Replay Protected Memory Block: Not Supported 00:19:21.263 00:19:21.263 Firmware Slot Information 00:19:21.263 ========================= 00:19:21.263 Active slot: 1 00:19:21.263 Slot 1 Firmware Revision: 25.01 00:19:21.263 00:19:21.263 00:19:21.263 Commands Supported and Effects 00:19:21.263 ============================== 00:19:21.263 Admin 
Commands 00:19:21.263 -------------- 00:19:21.263 Get Log Page (02h): Supported 00:19:21.263 Identify (06h): Supported 00:19:21.263 Abort (08h): Supported 00:19:21.263 Set Features (09h): Supported 00:19:21.263 Get Features (0Ah): Supported 00:19:21.263 Asynchronous Event Request (0Ch): Supported 00:19:21.263 Keep Alive (18h): Supported 00:19:21.263 I/O Commands 00:19:21.263 ------------ 00:19:21.263 Flush (00h): Supported LBA-Change 00:19:21.263 Write (01h): Supported LBA-Change 00:19:21.263 Read (02h): Supported 00:19:21.263 Compare (05h): Supported 00:19:21.263 Write Zeroes (08h): Supported LBA-Change 00:19:21.263 Dataset Management (09h): Supported LBA-Change 00:19:21.263 Copy (19h): Supported LBA-Change 00:19:21.263 00:19:21.263 Error Log 00:19:21.263 ========= 00:19:21.263 00:19:21.263 Arbitration 00:19:21.263 =========== 00:19:21.263 Arbitration Burst: 1 00:19:21.263 00:19:21.263 Power Management 00:19:21.263 ================ 00:19:21.263 Number of Power States: 1 00:19:21.263 Current Power State: Power State #0 00:19:21.263 Power State #0: 00:19:21.263 Max Power: 0.00 W 00:19:21.263 Non-Operational State: Operational 00:19:21.263 Entry Latency: Not Reported 00:19:21.263 Exit Latency: Not Reported 00:19:21.263 Relative Read Throughput: 0 00:19:21.263 Relative Read Latency: 0 00:19:21.263 Relative Write Throughput: 0 00:19:21.263 Relative Write Latency: 0 00:19:21.263 Idle Power: Not Reported 00:19:21.263 Active Power: Not Reported 00:19:21.263 Non-Operational Permissive Mode: Not Supported 00:19:21.263 00:19:21.263 Health Information 00:19:21.263 ================== 00:19:21.263 Critical Warnings: 00:19:21.263 Available Spare Space: OK 00:19:21.263 Temperature: OK 00:19:21.263 Device Reliability: OK 00:19:21.263 Read Only: No 00:19:21.263 Volatile Memory Backup: OK 00:19:21.263 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:21.263 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:21.263 Available Spare: 0% 00:19:21.263
[2024-12-09 09:36:56.459792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:21.263 [2024-12-09 09:36:56.459798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:21.263 [2024-12-09 09:36:56.459820] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:21.263 [2024-12-09 09:36:56.459826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.263 [2024-12-09 09:36:56.459831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.263 [2024-12-09 09:36:56.459835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.263 [2024-12-09 09:36:56.459840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.263 [2024-12-09 09:36:56.462642] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:21.263 [2024-12-09 09:36:56.462650] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:21.263 [2024-12-09 09:36:56.463134] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:21.263 [2024-12-09 09:36:56.463171] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:21.263 [2024-12-09 09:36:56.463176] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:21.263 [2024-12-09 09:36:56.464144] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:21.263 [2024-12-09 09:36:56.464152] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:21.263 [2024-12-09 09:36:56.464201] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:21.263 [2024-12-09 09:36:56.465157] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:21.263
Available Spare Threshold: 0% 00:19:21.263 Life Percentage Used: 0% 00:19:21.263 Data Units Read: 0 00:19:21.263 Data Units Written: 0 00:19:21.263 Host Read Commands: 0 00:19:21.263 Host Write Commands: 0 00:19:21.263 Controller Busy Time: 0 minutes 00:19:21.263 Power Cycles: 0 00:19:21.263 Power On Hours: 0 hours 00:19:21.263 Unsafe Shutdowns: 0 00:19:21.263 Unrecoverable Media Errors: 0 00:19:21.263 Lifetime Error Log Entries: 0 00:19:21.263 Warning Temperature Time: 0 minutes 00:19:21.263 Critical Temperature Time: 0 minutes 00:19:21.263 00:19:21.263 Number of Queues 00:19:21.263 ================ 00:19:21.263 Number of I/O Submission Queues: 127 00:19:21.264 Number of I/O Completion Queues: 127 00:19:21.264 00:19:21.264 Active Namespaces 00:19:21.264 ================= 00:19:21.264 Namespace ID:1 00:19:21.264 Error Recovery Timeout: Unlimited 00:19:21.264 Command Set Identifier: NVM (00h) 00:19:21.264 Deallocate: Supported 00:19:21.264 Deallocated/Unwritten Error: Not Supported 00:19:21.264 Deallocated Read Value: Unknown 00:19:21.264 Deallocate in Write Zeroes: Not Supported 00:19:21.264 Deallocated Guard Field: 0xFFFF 00:19:21.264 Flush: Supported 00:19:21.264 Reservation: Supported 00:19:21.264 Namespace Sharing Capabilities: Multiple Controllers 00:19:21.264 Size (in LBAs): 131072 (0GiB) 00:19:21.264 Capacity (in LBAs): 131072 (0GiB) 00:19:21.264 Utilization (in LBAs): 131072 (0GiB) 00:19:21.264 NGUID: 99CB4A51689246769C4B17097C200C53 00:19:21.264 UUID: 99cb4a51-6892-4676-9c4b-17097c200c53 00:19:21.264 Thin Provisioning: Not Supported 00:19:21.264 Per-NS Atomic Units: Yes 00:19:21.264 Atomic Boundary Size (Normal): 0 00:19:21.264 Atomic Boundary Size (PFail): 0 00:19:21.264 Atomic Boundary Offset: 0 00:19:21.264 Maximum Single Source Range Length: 65535 00:19:21.264 Maximum Copy Length: 65535 00:19:21.264 Maximum Source Range Count: 1 00:19:21.264 NGUID/EUI64 Never Reused: No 00:19:21.264 Namespace Write Protected: No 00:19:21.264 Number of LBA Formats: 1 00:19:21.264 Current LBA Format: LBA Format #00 00:19:21.264 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:21.264 00:19:21.264 09:36:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
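A minimal sketch of the perf invocation pattern used by steps @84 and @85 above and below, which drive the same spdk_nvme_perf binary at the vfio-user endpoint and vary only the workload flag. Flag meanings are taken from the tool's usage conventions (-q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds, -c core mask, -r transport ID), with -s/-g assumed to be the DPDK hugepage-size (MB) and single-segment memory options; run_perf, SPDK_DIR, and TRID are illustrative names, while the paths and transport ID are the ones from this run:

    #!/usr/bin/env bash
    # Sketch: replay the read/write perf steps against the vfio-user controller.
    # SPDK_DIR and TRID are illustrative variables holding values from this log.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'

    run_perf() {
        # $1 = workload passed to -w (read, write, randrw, ...)
        # Remaining flags mirror the logged command: -s 256 -g -q 128 -o 4096 -t 5 -c 0x2
        "$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" \
            -s 256 -g -q 128 -o 4096 -w "$1" -t 5 -c 0x2
    }

    run_perf read    # matches step @84
    run_perf write   # matches step @85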
00:19:21.264 [2024-12-09 09:36:56.654312] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:26.556 Initializing NVMe Controllers 00:19:26.556 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:26.556 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:26.556 Initialization complete. Launching workers. 00:19:26.556 ======================================================== 00:19:26.556 Latency(us) 00:19:26.556 Device Information : IOPS MiB/s Average min max 00:19:26.556 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39902.45 155.87 3207.50 871.03 10744.57 00:19:26.556 ======================================================== 00:19:26.556 Total : 39902.45 155.87 3207.50 871.03 10744.57 00:19:26.556 00:19:26.556 [2024-12-09 09:37:01.673854] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:26.556 09:37:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:26.556 [2024-12-09 09:37:01.867723] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:31.948 Initializing NVMe Controllers 00:19:31.948 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:31.948 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:19:31.948 Initialization complete. Launching workers. 
00:19:31.948 ======================================================== 00:19:31.948 Latency(us) 00:19:31.948 Device Information : IOPS MiB/s Average min max 00:19:31.948 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16014.00 62.55 8004.06 5569.81 15961.74 00:19:31.948 ======================================================== 00:19:31.948 Total : 16014.00 62.55 8004.06 5569.81 15961.74 00:19:31.948 00:19:31.948 [2024-12-09 09:37:06.907004] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:31.948 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:31.948 [2024-12-09 09:37:07.105862] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:37.232 [2024-12-09 09:37:12.169827] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:37.232 Initializing NVMe Controllers 00:19:37.232 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:37.232 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:37.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:19:37.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:19:37.232 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:19:37.232 Initialization complete. Launching workers. 00:19:37.232 Starting thread on core 2 00:19:37.232 Starting thread on core 3 00:19:37.232 Starting thread on core 1 00:19:37.232 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:19:37.232 [2024-12-09 09:37:12.420846] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.527 [2024-12-09 09:37:15.489271] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.527 Initializing NVMe Controllers 00:19:40.527 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.527 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:19:40.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:19:40.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:19:40.527 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:19:40.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:40.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:40.527 Initialization complete. Launching workers. 
00:19:40.527 Starting thread on core 1 with urgent priority queue 00:19:40.527 Starting thread on core 2 with urgent priority queue 00:19:40.527 Starting thread on core 3 with urgent priority queue 00:19:40.527 Starting thread on core 0 with urgent priority queue 00:19:40.527 SPDK bdev Controller (SPDK1 ) core 0: 10065.33 IO/s 9.94 secs/100000 ios 00:19:40.527 SPDK bdev Controller (SPDK1 ) core 1: 14023.33 IO/s 7.13 secs/100000 ios 00:19:40.527 SPDK bdev Controller (SPDK1 ) core 2: 12281.00 IO/s 8.14 secs/100000 ios 00:19:40.527 SPDK bdev Controller (SPDK1 ) core 3: 16067.33 IO/s 6.22 secs/100000 ios 00:19:40.527 ======================================================== 00:19:40.527 00:19:40.527 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:40.527 [2024-12-09 09:37:15.731014] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:40.527 Initializing NVMe Controllers 00:19:40.527 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.527 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:40.527 Namespace ID: 1 size: 0GB 00:19:40.527 Initialization complete. 00:19:40.527 INFO: using host memory buffer for IO 00:19:40.527 Hello world! 00:19:40.527 [2024-12-09 09:37:15.765222] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:40.527 09:37:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:19:40.787 [2024-12-09 09:37:16.008025] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:41.727 Initializing NVMe Controllers 00:19:41.727 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:41.727 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:41.727 Initialization complete. Launching workers. 
00:19:41.727 submit (in ns) avg, min, max = 5388.4, 2823.3, 3997614.2 00:19:41.727 complete (in ns) avg, min, max = 17009.8, 1630.0, 4001690.8 00:19:41.727 00:19:41.727 Submit histogram 00:19:41.727 ================ 00:19:41.727 Range in us Cumulative Count 00:19:41.727 2.813 - 2.827: 0.0252% ( 5) 00:19:41.727 2.827 - 2.840: 0.4783% ( 90) 00:19:41.727 2.840 - 2.853: 1.9939% ( 301) 00:19:41.727 2.853 - 2.867: 5.1860% ( 634) 00:19:41.727 2.867 - 2.880: 10.1757% ( 991) 00:19:41.727 2.880 - 2.893: 15.1604% ( 990) 00:19:41.727 2.893 - 2.907: 21.0765% ( 1175) 00:19:41.727 2.907 - 2.920: 26.9976% ( 1176) 00:19:41.727 2.920 - 2.933: 32.6570% ( 1124) 00:19:41.727 2.933 - 2.947: 38.4321% ( 1147) 00:19:41.727 2.947 - 2.960: 44.0159% ( 1109) 00:19:41.727 2.960 - 2.973: 49.0559% ( 1001) 00:19:41.727 2.973 - 2.987: 55.6316% ( 1306) 00:19:41.727 2.987 - 3.000: 64.5889% ( 1779) 00:19:41.727 3.000 - 3.013: 73.8936% ( 1848) 00:19:41.727 3.013 - 3.027: 81.5216% ( 1515) 00:19:41.727 3.027 - 3.040: 87.8908% ( 1265) 00:19:41.727 3.040 - 3.053: 93.0467% ( 1024) 00:19:41.727 3.053 - 3.067: 96.2439% ( 635) 00:19:41.727 3.067 - 3.080: 97.9407% ( 337) 00:19:41.727 3.080 - 3.093: 98.7765% ( 166) 00:19:41.727 3.093 - 3.107: 99.2246% ( 89) 00:19:41.727 3.107 - 3.120: 99.3706% ( 29) 00:19:41.727 3.120 - 3.133: 99.4915% ( 24) 00:19:41.727 3.133 - 3.147: 99.5317% ( 8) 00:19:41.727 3.147 - 3.160: 99.5519% ( 4) 00:19:41.727 3.160 - 3.173: 99.5720% ( 4) 00:19:41.727 3.173 - 3.187: 99.5771% ( 1) 00:19:41.727 3.387 - 3.400: 99.5821% ( 1) 00:19:41.727 3.413 - 3.440: 99.5871% ( 1) 00:19:41.727 3.520 - 3.547: 99.5922% ( 1) 00:19:41.727 3.547 - 3.573: 99.5972% ( 1) 00:19:41.727 3.707 - 3.733: 99.6022% ( 1) 00:19:41.727 3.813 - 3.840: 99.6073% ( 1) 00:19:41.727 4.000 - 4.027: 99.6123% ( 1) 00:19:41.727 4.053 - 4.080: 99.6224% ( 2) 00:19:41.727 4.373 - 4.400: 99.6274% ( 1) 00:19:41.727 4.507 - 4.533: 99.6425% ( 3) 00:19:41.727 4.533 - 4.560: 99.6576% ( 3) 00:19:41.727 4.560 - 4.587: 99.6627% ( 1) 00:19:41.727 4.587 - 4.613: 99.6727% ( 2) 00:19:41.727 4.640 - 4.667: 99.6878% ( 3) 00:19:41.727 4.667 - 4.693: 99.6929% ( 1) 00:19:41.727 4.800 - 4.827: 99.6979% ( 1) 00:19:41.727 4.853 - 4.880: 99.7029% ( 1) 00:19:41.727 4.880 - 4.907: 99.7130% ( 2) 00:19:41.727 4.907 - 4.933: 99.7180% ( 1) 00:19:41.727 4.933 - 4.960: 99.7281% ( 2) 00:19:41.727 4.960 - 4.987: 99.7331% ( 1) 00:19:41.727 4.987 - 5.013: 99.7382% ( 1) 00:19:41.727 5.013 - 5.040: 99.7432% ( 1) 00:19:41.727 5.067 - 5.093: 99.7533% ( 2) 00:19:41.727 5.093 - 5.120: 99.7583% ( 1) 00:19:41.727 5.147 - 5.173: 99.7684% ( 2) 00:19:41.727 5.173 - 5.200: 99.7734% ( 1) 00:19:41.727 5.227 - 5.253: 99.7785% ( 1) 00:19:41.727 5.333 - 5.360: 99.7835% ( 1) 00:19:41.727 5.413 - 5.440: 99.7936% ( 2) 00:19:41.727 5.440 - 5.467: 99.7986% ( 1) 00:19:41.727 5.493 - 5.520: 99.8036% ( 1) 00:19:41.727 5.520 - 5.547: 99.8187% ( 3) 00:19:41.727 5.547 - 5.573: 99.8238% ( 1) 00:19:41.727 5.627 - 5.653: 99.8338% ( 2) 00:19:41.727 5.653 - 5.680: 99.8389% ( 1) 00:19:41.727 5.680 - 5.707: 99.8439% ( 1) 00:19:41.727 5.707 - 5.733: 99.8540% ( 2) 00:19:41.727 5.787 - 5.813: 99.8590% ( 1) 00:19:41.727 6.080 - 6.107: 99.8691% ( 2) 00:19:41.727 6.267 - 6.293: 99.8741% ( 1) 00:19:41.727 6.320 - 6.347: 99.8892% ( 3) 00:19:41.727 6.453 - 6.480: 99.8943% ( 1) 00:19:41.727 6.560 - 6.587: 99.8993% ( 1) 00:19:41.727 6.613 - 6.640: 99.9043% ( 1) 00:19:41.727 7.093 - 7.147: 99.9094% ( 1) 00:19:41.727 7.200 - 7.253: 99.9144% ( 1) 00:19:41.727 7.307 - 7.360: 99.9245% ( 2) 00:19:41.727 7.360 - 7.413: 99.9295% ( 1) 00:19:41.727 
12.693 - 12.747: 99.9345% ( 1) 00:19:41.727 13.440 - 13.493: 99.9396% ( 1) 00:19:41.727 [2024-12-09 09:37:17.026652] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:41.727 3986.773 - 4014.080: 100.0000% ( 12) 00:19:41.727 00:19:41.727 Complete histogram 00:19:41.727 ================== 00:19:41.727 Range in us Cumulative Count 00:19:41.727 1.627 - 1.633: 0.0050% ( 1) 00:19:41.727 1.640 - 1.647: 0.7653% ( 151) 00:19:41.727 1.647 - 1.653: 1.2436% ( 95) 00:19:41.727 1.653 - 1.660: 1.2739% ( 6) 00:19:41.727 1.660 - 1.667: 1.4702% ( 39) 00:19:41.727 1.667 - 1.673: 1.5709% ( 20) 00:19:41.727 1.673 - 1.680: 1.5860% ( 3) 00:19:41.727 1.680 - 1.687: 1.6011% ( 3) 00:19:41.727 1.687 - 1.693: 1.6162% ( 3) 00:19:41.727 1.693 - 1.700: 9.1335% ( 1493) 00:19:41.727 1.700 - 1.707: 38.1703% ( 5767) 00:19:41.727 1.707 - 1.720: 57.4392% ( 3827) 00:19:41.727 1.720 - 1.733: 78.5811% ( 4199) 00:19:41.727 1.733 - 1.747: 83.6111% ( 999) 00:19:41.727 1.747 - 1.760: 84.8195% ( 240) 00:19:41.727 1.760 - 1.773: 88.7770% ( 786) 00:19:41.727 1.773 - 1.787: 94.0889% ( 1055) 00:19:41.727 1.787 - 1.800: 97.8299% ( 743) 00:19:41.727 1.800 - 1.813: 99.1894% ( 270) 00:19:41.727 1.813 - 1.827: 99.4814% ( 58) 00:19:41.727 1.827 - 1.840: 99.5418% ( 12) 00:19:41.727 1.853 - 1.867: 99.5469% ( 1) 00:19:41.727 2.027 - 2.040: 99.5519% ( 1) 00:19:41.727 3.267 - 3.280: 99.5569% ( 1) 00:19:41.727 3.360 - 3.373: 99.5620% ( 1) 00:19:41.727 3.400 - 3.413: 99.5670% ( 1) 00:19:41.727 3.787 - 3.813: 99.5720% ( 1) 00:19:41.727 3.813 - 3.840: 99.5771% ( 1) 00:19:41.727 3.867 - 3.893: 99.5821% ( 1) 00:19:41.727 4.480 - 4.507: 99.5922% ( 2) 00:19:41.727 4.800 - 4.827: 99.5972% ( 1) 00:19:41.727 8.427 - 8.480: 99.6022% ( 1) 00:19:41.727 9.013 - 9.067: 99.6073% ( 1) 00:19:41.727 11.200 - 11.253: 99.6123% ( 1) 00:19:41.727 34.133 - 34.347: 99.6173% ( 1) 00:19:41.727 3986.773 - 4014.080: 100.0000% ( 76) 00:19:41.727 00:19:41.727 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:19:41.727 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:41.727 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:19:41.727 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:19:41.727 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:41.986 [ 00:19:41.986 { 00:19:41.986 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:41.986 "subtype": "Discovery", 00:19:41.986 "listen_addresses": [], 00:19:41.986 "allow_any_host": true, 00:19:41.986 "hosts": [] 00:19:41.986 }, 00:19:41.986 { 00:19:41.986 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:41.986 "subtype": "NVMe", 00:19:41.986 "listen_addresses": [ 00:19:41.986 { 00:19:41.986 "trtype": "VFIOUSER", 00:19:41.986 "adrfam": "IPv4", 00:19:41.986 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:41.986 "trsvcid": "0" 00:19:41.986 } 00:19:41.986 ], 00:19:41.986 "allow_any_host": true, 00:19:41.986 "hosts": [], 00:19:41.986 "serial_number": "SPDK1", 00:19:41.986 "model_number": "SPDK bdev Controller", 00:19:41.986 "max_namespaces": 32, 00:19:41.986 "min_cntlid": 1, 00:19:41.986 
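The nvmf_get_subsystems RPC just issued at step @25 returns the JSON array shown next, which is what the AER test inspects for the namespace-to-bdev mapping. A short sketch of pulling that mapping out of the same output, using only fields that appear in the JSON below; the jq filter is illustrative and not part of the harness:

    #!/usr/bin/env bash
    # Sketch: list each NVMe subsystem's namespaces as "nqn nsid bdev uuid".
    # rpc.py and its path are as used in this run; the jq filter is illustrative.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/scripts/rpc.py" nvmf_get_subsystems | jq -r '
        .[] | select(.subtype == "NVMe") | .nqn as $nqn
            | .namespaces[]
            | "\($nqn) nsid=\(.nsid) bdev=\(.bdev_name) uuid=\(.uuid)"'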
"max_cntlid": 65519, 00:19:41.986 "namespaces": [ 00:19:41.986 { 00:19:41.986 "nsid": 1, 00:19:41.986 "bdev_name": "Malloc1", 00:19:41.986 "name": "Malloc1", 00:19:41.986 "nguid": "99CB4A51689246769C4B17097C200C53", 00:19:41.986 "uuid": "99cb4a51-6892-4676-9c4b-17097c200c53" 00:19:41.986 } 00:19:41.986 ] 00:19:41.986 }, 00:19:41.986 { 00:19:41.986 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:41.986 "subtype": "NVMe", 00:19:41.986 "listen_addresses": [ 00:19:41.986 { 00:19:41.986 "trtype": "VFIOUSER", 00:19:41.986 "adrfam": "IPv4", 00:19:41.986 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:41.986 "trsvcid": "0" 00:19:41.986 } 00:19:41.986 ], 00:19:41.986 "allow_any_host": true, 00:19:41.986 "hosts": [], 00:19:41.986 "serial_number": "SPDK2", 00:19:41.986 "model_number": "SPDK bdev Controller", 00:19:41.986 "max_namespaces": 32, 00:19:41.986 "min_cntlid": 1, 00:19:41.986 "max_cntlid": 65519, 00:19:41.986 "namespaces": [ 00:19:41.986 { 00:19:41.986 "nsid": 1, 00:19:41.986 "bdev_name": "Malloc2", 00:19:41.986 "name": "Malloc2", 00:19:41.986 "nguid": "35C07BCD0B754F3D84E3E0DB81C64438", 00:19:41.986 "uuid": "35c07bcd-0b75-4f3d-84e3-e0db81c64438" 00:19:41.986 } 00:19:41.986 ] 00:19:41.986 } 00:19:41.986 ] 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2774434 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:41.986 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:41.986 [2024-12-09 09:37:17.405091] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:41.986 Malloc3 00:19:42.244 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:42.244 [2024-12-09 09:37:17.607484] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:42.244 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:42.244 Asynchronous Event Request test 00:19:42.244 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.244 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:42.244 Registering asynchronous event callbacks... 00:19:42.244 Starting namespace attribute notice tests for all controllers... 00:19:42.244 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:42.244 aer_cb - Changed Namespace 00:19:42.244 Cleaning up... 00:19:42.503 [ 00:19:42.503 { 00:19:42.503 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:42.503 "subtype": "Discovery", 00:19:42.503 "listen_addresses": [], 00:19:42.503 "allow_any_host": true, 00:19:42.503 "hosts": [] 00:19:42.503 }, 00:19:42.503 { 00:19:42.503 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:42.503 "subtype": "NVMe", 00:19:42.503 "listen_addresses": [ 00:19:42.503 { 00:19:42.503 "trtype": "VFIOUSER", 00:19:42.503 "adrfam": "IPv4", 00:19:42.503 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:42.503 "trsvcid": "0" 00:19:42.503 } 00:19:42.503 ], 00:19:42.503 "allow_any_host": true, 00:19:42.503 "hosts": [], 00:19:42.503 "serial_number": "SPDK1", 00:19:42.503 "model_number": "SPDK bdev Controller", 00:19:42.503 "max_namespaces": 32, 00:19:42.503 "min_cntlid": 1, 00:19:42.503 "max_cntlid": 65519, 00:19:42.503 "namespaces": [ 00:19:42.503 { 00:19:42.503 "nsid": 1, 00:19:42.503 "bdev_name": "Malloc1", 00:19:42.503 "name": "Malloc1", 00:19:42.503 "nguid": "99CB4A51689246769C4B17097C200C53", 00:19:42.503 "uuid": "99cb4a51-6892-4676-9c4b-17097c200c53" 00:19:42.503 }, 00:19:42.503 { 00:19:42.503 "nsid": 2, 00:19:42.503 "bdev_name": "Malloc3", 00:19:42.503 "name": "Malloc3", 00:19:42.503 "nguid": "6A10836A7CE048258FBF7313C671276F", 00:19:42.503 "uuid": "6a10836a-7ce0-4825-8fbf-7313c671276f" 00:19:42.503 } 00:19:42.503 ] 00:19:42.503 }, 00:19:42.503 { 00:19:42.503 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:42.503 "subtype": "NVMe", 00:19:42.503 "listen_addresses": [ 00:19:42.503 { 00:19:42.503 "trtype": "VFIOUSER", 00:19:42.503 "adrfam": "IPv4", 00:19:42.503 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:42.503 "trsvcid": "0" 00:19:42.503 } 00:19:42.503 ], 00:19:42.503 "allow_any_host": true, 00:19:42.503 "hosts": [], 00:19:42.503 "serial_number": "SPDK2", 00:19:42.503 "model_number": "SPDK bdev 
Controller", 00:19:42.503 "max_namespaces": 32, 00:19:42.503 "min_cntlid": 1, 00:19:42.503 "max_cntlid": 65519, 00:19:42.503 "namespaces": [ 00:19:42.503 { 00:19:42.503 "nsid": 1, 00:19:42.503 "bdev_name": "Malloc2", 00:19:42.503 "name": "Malloc2", 00:19:42.503 "nguid": "35C07BCD0B754F3D84E3E0DB81C64438", 00:19:42.503 "uuid": "35c07bcd-0b75-4f3d-84e3-e0db81c64438" 00:19:42.503 } 00:19:42.503 ] 00:19:42.503 } 00:19:42.503 ] 00:19:42.503 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2774434 00:19:42.503 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:42.503 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:42.503 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:42.503 09:37:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:42.503 [2024-12-09 09:37:17.840324] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:19:42.503 [2024-12-09 09:37:17.840367] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774612 ] 00:19:42.503 [2024-12-09 09:37:17.881852] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:42.503 [2024-12-09 09:37:17.884029] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:42.503 [2024-12-09 09:37:17.884045] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ffb06b7b000 00:19:42.503 [2024-12-09 09:37:17.885036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.886041] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.887042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.888052] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.889055] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.890062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.891072] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:42.503 [2024-12-09 09:37:17.892085] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:19:42.503 [2024-12-09 09:37:17.893089] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:42.504 [2024-12-09 09:37:17.893097] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ffb05885000 00:19:42.504 [2024-12-09 09:37:17.894009] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:42.504 [2024-12-09 09:37:17.906917] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:42.504 [2024-12-09 09:37:17.906941] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:42.504 [2024-12-09 09:37:17.912013] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:42.504 [2024-12-09 09:37:17.912044] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:42.504 [2024-12-09 09:37:17.912099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:42.504 [2024-12-09 09:37:17.912109] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:42.504 [2024-12-09 09:37:17.912113] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:42.504 [2024-12-09 09:37:17.913015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:42.504 [2024-12-09 09:37:17.913024] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:42.504 [2024-12-09 09:37:17.913029] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:42.504 [2024-12-09 09:37:17.914020] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:42.504 [2024-12-09 09:37:17.914027] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:42.504 [2024-12-09 09:37:17.914032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.915031] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:42.504 [2024-12-09 09:37:17.915037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.916036] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:42.504 [2024-12-09 09:37:17.916042] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:19:42.504 [2024-12-09 09:37:17.916046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.916051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.916156] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:42.504 [2024-12-09 09:37:17.916160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.916163] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:42.504 [2024-12-09 09:37:17.917047] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:42.504 [2024-12-09 09:37:17.918050] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:42.504 [2024-12-09 09:37:17.919058] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:42.504 [2024-12-09 09:37:17.920061] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:42.504 [2024-12-09 09:37:17.920093] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:42.504 [2024-12-09 09:37:17.921073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:42.504 [2024-12-09 09:37:17.921079] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:42.504 [2024-12-09 09:37:17.921083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.921097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:42.504 [2024-12-09 09:37:17.921106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.921118] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.504 [2024-12-09 09:37:17.921122] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.504 [2024-12-09 09:37:17.921125] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.504 [2024-12-09 09:37:17.921134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.504 [2024-12-09 09:37:17.928644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:42.504 
[2024-12-09 09:37:17.928655] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:42.504 [2024-12-09 09:37:17.928659] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:42.504 [2024-12-09 09:37:17.928662] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:42.504 [2024-12-09 09:37:17.928665] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:42.504 [2024-12-09 09:37:17.928668] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:42.504 [2024-12-09 09:37:17.928671] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:42.504 [2024-12-09 09:37:17.928675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.928680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.928687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:42.504 [2024-12-09 09:37:17.936642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:42.504 [2024-12-09 09:37:17.936652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.504 [2024-12-09 09:37:17.936658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.504 [2024-12-09 09:37:17.936664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.504 [2024-12-09 09:37:17.936670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:42.504 [2024-12-09 09:37:17.936675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.936682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.936689] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:42.504 [2024-12-09 09:37:17.944642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:42.504 [2024-12-09 09:37:17.944648] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:42.504 [2024-12-09 09:37:17.944652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:19:42.504 [2024-12-09 09:37:17.944657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.944661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.944667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:42.504 [2024-12-09 09:37:17.952642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:42.504 [2024-12-09 09:37:17.952690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.952696] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:42.504 [2024-12-09 09:37:17.952702] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:42.504 [2024-12-09 09:37:17.952705] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:42.504 [2024-12-09 09:37:17.952707] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.504 [2024-12-09 09:37:17.952712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:42.763 [2024-12-09 09:37:17.960643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:17.960651] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:42.764 [2024-12-09 09:37:17.960661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.960666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.960671] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.764 [2024-12-09 09:37:17.960674] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.764 [2024-12-09 09:37:17.960676] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.764 [2024-12-09 09:37:17.960681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:17.968643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:17.968656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.968662] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.968667] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:42.764 [2024-12-09 09:37:17.968670] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.764 [2024-12-09 09:37:17.968672] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.764 [2024-12-09 09:37:17.968677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:17.976643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:17.976650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976673] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976677] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:42.764 [2024-12-09 09:37:17.976680] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:42.764 [2024-12-09 09:37:17.976684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:42.764 [2024-12-09 09:37:17.976696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:17.984643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:17.984654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:17.992644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:17.992654] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:18.000643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:19:42.764 [2024-12-09 09:37:18.000653] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:18.008643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:18.008655] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:42.764 [2024-12-09 09:37:18.008660] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:42.764 [2024-12-09 09:37:18.008663] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:42.764 [2024-12-09 09:37:18.008665] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:42.764 [2024-12-09 09:37:18.008667] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:42.764 [2024-12-09 09:37:18.008672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:42.764 [2024-12-09 09:37:18.008677] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:42.764 [2024-12-09 09:37:18.008680] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:42.764 [2024-12-09 09:37:18.008683] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.764 [2024-12-09 09:37:18.008687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:18.008692] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:42.764 [2024-12-09 09:37:18.008695] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:42.764 [2024-12-09 09:37:18.008697] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.764 [2024-12-09 09:37:18.008702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:18.008707] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:42.764 [2024-12-09 09:37:18.008710] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:42.764 [2024-12-09 09:37:18.008712] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:42.764 [2024-12-09 09:37:18.008717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:42.764 [2024-12-09 09:37:18.016645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:18.016656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:42.764 [2024-12-09 09:37:18.016663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:42.764 
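The *DEBUG* trace above, together with the completion records that follow, walks the standard SPDK host-side initialization state machine for an NVMe-oF controller attached over vfio-user: Identify Controller, configure AER, keep-alive setup, Set Features Number of Queues, Identify Active Namespace List, per-namespace Identify and NS descriptors, then the Get Features / Get Log Page sweep that also produces the controller dump printed next. As a minimal sketch, assuming the target from this run is still listening and that the identify example was built into build/bin alongside spdk_nvme_perf, the same dump could be regenerated with:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'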
[2024-12-09 09:37:18.016668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:42.764 ===================================================== 00:19:42.764 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:42.764 ===================================================== 00:19:42.764 Controller Capabilities/Features 00:19:42.764 ================================ 00:19:42.764 Vendor ID: 4e58 00:19:42.764 Subsystem Vendor ID: 4e58 00:19:42.764 Serial Number: SPDK2 00:19:42.764 Model Number: SPDK bdev Controller 00:19:42.764 Firmware Version: 25.01 00:19:42.764 Recommended Arb Burst: 6 00:19:42.764 IEEE OUI Identifier: 8d 6b 50 00:19:42.764 Multi-path I/O 00:19:42.764 May have multiple subsystem ports: Yes 00:19:42.764 May have multiple controllers: Yes 00:19:42.764 Associated with SR-IOV VF: No 00:19:42.764 Max Data Transfer Size: 131072 00:19:42.764 Max Number of Namespaces: 32 00:19:42.764 Max Number of I/O Queues: 127 00:19:42.764 NVMe Specification Version (VS): 1.3 00:19:42.764 NVMe Specification Version (Identify): 1.3 00:19:42.764 Maximum Queue Entries: 256 00:19:42.764 Contiguous Queues Required: Yes 00:19:42.764 Arbitration Mechanisms Supported 00:19:42.764 Weighted Round Robin: Not Supported 00:19:42.764 Vendor Specific: Not Supported 00:19:42.764 Reset Timeout: 15000 ms 00:19:42.764 Doorbell Stride: 4 bytes 00:19:42.764 NVM Subsystem Reset: Not Supported 00:19:42.764 Command Sets Supported 00:19:42.764 NVM Command Set: Supported 00:19:42.764 Boot Partition: Not Supported 00:19:42.764 Memory Page Size Minimum: 4096 bytes 00:19:42.764 Memory Page Size Maximum: 4096 bytes 00:19:42.764 Persistent Memory Region: Not Supported 00:19:42.764 Optional Asynchronous Events Supported 00:19:42.764 Namespace Attribute Notices: Supported 00:19:42.764 Firmware Activation Notices: Not Supported 00:19:42.764 ANA Change Notices: Not Supported 00:19:42.764 PLE Aggregate Log Change Notices: Not Supported 00:19:42.764 LBA Status Info Alert Notices: Not Supported 00:19:42.764 EGE Aggregate Log Change Notices: Not Supported 00:19:42.764 Normal NVM Subsystem Shutdown event: Not Supported 00:19:42.764 Zone Descriptor Change Notices: Not Supported 00:19:42.764 Discovery Log Change Notices: Not Supported 00:19:42.764 Controller Attributes 00:19:42.764 128-bit Host Identifier: Supported 00:19:42.764 Non-Operational Permissive Mode: Not Supported 00:19:42.764 NVM Sets: Not Supported 00:19:42.764 Read Recovery Levels: Not Supported 00:19:42.764 Endurance Groups: Not Supported 00:19:42.764 Predictable Latency Mode: Not Supported 00:19:42.764 Traffic Based Keep Alive: Not Supported 00:19:42.764 Namespace Granularity: Not Supported 00:19:42.764 SQ Associations: Not Supported 00:19:42.764 UUID List: Not Supported 00:19:42.764 Multi-Domain Subsystem: Not Supported 00:19:42.764 Fixed Capacity Management: Not Supported 00:19:42.764 Variable Capacity Management: Not Supported 00:19:42.764 Delete Endurance Group: Not Supported 00:19:42.765 Delete NVM Set: Not Supported 00:19:42.765 Extended LBA Formats Supported: Not Supported 00:19:42.765 Flexible Data Placement Supported: Not Supported 00:19:42.765 00:19:42.765 Controller Memory Buffer Support 00:19:42.765 ================================ 00:19:42.765 Supported: No 00:19:42.765 00:19:42.765 Persistent Memory Region Support 00:19:42.765 ================================ 00:19:42.765 Supported: No 00:19:42.765 00:19:42.765 Admin Command Set Attributes
00:19:42.765 ============================ 00:19:42.765 Security Send/Receive: Not Supported 00:19:42.765 Format NVM: Not Supported 00:19:42.765 Firmware Activate/Download: Not Supported 00:19:42.765 Namespace Management: Not Supported 00:19:42.765 Device Self-Test: Not Supported 00:19:42.765 Directives: Not Supported 00:19:42.765 NVMe-MI: Not Supported 00:19:42.765 Virtualization Management: Not Supported 00:19:42.765 Doorbell Buffer Config: Not Supported 00:19:42.765 Get LBA Status Capability: Not Supported 00:19:42.765 Command & Feature Lockdown Capability: Not Supported 00:19:42.765 Abort Command Limit: 4 00:19:42.765 Async Event Request Limit: 4 00:19:42.765 Number of Firmware Slots: N/A 00:19:42.765 Firmware Slot 1 Read-Only: N/A 00:19:42.765 Firmware Activation Without Reset: N/A 00:19:42.765 Multiple Update Detection Support: N/A 00:19:42.765 Firmware Update Granularity: No Information Provided 00:19:42.765 Per-Namespace SMART Log: No 00:19:42.765 Asymmetric Namespace Access Log Page: Not Supported 00:19:42.765 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:42.765 Command Effects Log Page: Supported 00:19:42.765 Get Log Page Extended Data: Supported 00:19:42.765 Telemetry Log Pages: Not Supported 00:19:42.765 Persistent Event Log Pages: Not Supported 00:19:42.765 Supported Log Pages Log Page: May Support 00:19:42.765 Commands Supported & Effects Log Page: Not Supported 00:19:42.765 Feature Identifiers & Effects Log Page: May Support 00:19:42.765 NVMe-MI Commands & Effects Log Page: May Support 00:19:42.765 Data Area 4 for Telemetry Log: Not Supported 00:19:42.765 Error Log Page Entries Supported: 128 00:19:42.765 Keep Alive: Supported 00:19:42.765 Keep Alive Granularity: 10000 ms 00:19:42.765 00:19:42.765 NVM Command Set Attributes 00:19:42.765 ========================== 00:19:42.765 Submission Queue Entry Size 00:19:42.765 Max: 64 00:19:42.765 Min: 64 00:19:42.765 Completion Queue Entry Size 00:19:42.765 Max: 16 00:19:42.765 Min: 16 00:19:42.765 Number of Namespaces: 32 00:19:42.765 Compare Command: Supported 00:19:42.765 Write Uncorrectable Command: Not Supported 00:19:42.765 Dataset Management Command: Supported 00:19:42.765 Write Zeroes Command: Supported 00:19:42.765 Set Features Save Field: Not Supported 00:19:42.765 Reservations: Not Supported 00:19:42.765 Timestamp: Not Supported 00:19:42.765 Copy: Supported 00:19:42.765 Volatile Write Cache: Present 00:19:42.765 Atomic Write Unit (Normal): 1 00:19:42.765 Atomic Write Unit (PFail): 1 00:19:42.765 Atomic Compare & Write Unit: 1 00:19:42.765 Fused Compare & Write: Supported 00:19:42.765 Scatter-Gather List 00:19:42.765 SGL Command Set: Supported (Dword aligned) 00:19:42.765 SGL Keyed: Not Supported 00:19:42.765 SGL Bit Bucket Descriptor: Not Supported 00:19:42.765 SGL Metadata Pointer: Not Supported 00:19:42.765 Oversized SGL: Not Supported 00:19:42.765 SGL Metadata Address: Not Supported 00:19:42.765 SGL Offset: Not Supported 00:19:42.765 Transport SGL Data Block: Not Supported 00:19:42.765 Replay Protected Memory Block: Not Supported 00:19:42.765 00:19:42.765 Firmware Slot Information 00:19:42.765 ========================= 00:19:42.765 Active slot: 1 00:19:42.765 Slot 1 Firmware Revision: 25.01 00:19:42.765 00:19:42.765 00:19:42.765 Commands Supported and Effects 00:19:42.765 ============================== 00:19:42.765 Admin Commands 00:19:42.765 -------------- 00:19:42.765 Get Log Page (02h): Supported 00:19:42.765 Identify (06h): Supported 00:19:42.765 Abort (08h): Supported 00:19:42.765 Set Features (09h): Supported
00:19:42.765 Get Features (0Ah): Supported 00:19:42.765 Asynchronous Event Request (0Ch): Supported 00:19:42.765 Keep Alive (18h): Supported 00:19:42.765 I/O Commands 00:19:42.765 ------------ 00:19:42.765 Flush (00h): Supported LBA-Change 00:19:42.765 Write (01h): Supported LBA-Change 00:19:42.765 Read (02h): Supported 00:19:42.765 Compare (05h): Supported 00:19:42.765 Write Zeroes (08h): Supported LBA-Change 00:19:42.765 Dataset Management (09h): Supported LBA-Change 00:19:42.765 Copy (19h): Supported LBA-Change 00:19:42.765 00:19:42.765 Error Log 00:19:42.765 ========= 00:19:42.765 00:19:42.765 Arbitration 00:19:42.765 =========== 00:19:42.765 Arbitration Burst: 1 00:19:42.765 00:19:42.765 Power Management 00:19:42.765 ================ 00:19:42.765 Number of Power States: 1 00:19:42.765 Current Power State: Power State #0 00:19:42.765 Power State #0: 00:19:42.765 Max Power: 0.00 W 00:19:42.765 Non-Operational State: Operational 00:19:42.765 Entry Latency: Not Reported 00:19:42.765 Exit Latency: Not Reported 00:19:42.765 Relative Read Throughput: 0 00:19:42.765 Relative Read Latency: 0 00:19:42.765 Relative Write Throughput: 0 00:19:42.765 Relative Write Latency: 0 00:19:42.765 Idle Power: Not Reported 00:19:42.765 Active Power: Not Reported 00:19:42.765 Non-Operational Permissive Mode: Not Supported 00:19:42.765 00:19:42.765 Health Information 00:19:42.765 ================== 00:19:42.765 Critical Warnings: 00:19:42.765 Available Spare Space: OK 00:19:42.765 Temperature: OK 00:19:42.765 Device Reliability: OK 00:19:42.765 Read Only: No 00:19:42.765 Volatile Memory Backup: OK 00:19:42.765 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:42.765 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:42.765 Available Spare: 0% 00:19:42.765 Available Spare Threshold: 0% 00:19:42.765 Life Percentage Used: 0% 00:19:42.765 Data Units Read: 0 00:19:42.765 Data Units Written: 0 00:19:42.765 Host Read Commands: 0 00:19:42.765 Host Write Commands: 0 00:19:42.765 Controller Busy Time: 0 minutes 00:19:42.765 Power Cycles: 0 00:19:42.765 Power On Hours: 0 hours 00:19:42.765 Unsafe Shutdowns: 0 00:19:42.765 Unrecoverable Media Errors: 0 00:19:42.765 Lifetime Error Log Entries: 0 00:19:42.765 Warning Temperature Time: 0 minutes 00:19:42.765 Critical Temperature Time: 0 minutes 00:19:42.765 00:19:42.765 Number of Queues 00:19:42.765 ================ 00:19:42.765 Number of I/O Submission Queues: 127 00:19:42.765 Number of I/O Completion Queues: 127 00:19:42.765 00:19:42.765 Active Namespaces 00:19:42.765 ================= 00:19:42.765 Namespace ID:1 00:19:42.765 Error Recovery Timeout: Unlimited 00:19:42.765 Command Set Identifier: NVM (00h) 00:19:42.765 Deallocate: Supported 00:19:42.765 Deallocated/Unwritten Error: Not Supported 00:19:42.765 Deallocated Read Value: Unknown 00:19:42.766 Deallocate in Write Zeroes: Not Supported 00:19:42.766 Deallocated Guard Field: 0xFFFF 00:19:42.766 Flush: Supported 00:19:42.766 Reservation: Supported 00:19:42.766 Namespace Sharing Capabilities: Multiple Controllers 00:19:42.766 Size (in LBAs): 131072 (0GiB) 00:19:42.766 Capacity (in LBAs): 131072 (0GiB) 00:19:42.766 Utilization (in LBAs): 131072 (0GiB) 00:19:42.766 NGUID: 35C07BCD0B754F3D84E3E0DB81C64438 00:19:42.766 UUID: 35c07bcd-0b75-4f3d-84e3-e0db81c64438 00:19:42.766 Thin Provisioning: Not Supported 00:19:42.766 Per-NS Atomic Units: Yes 00:19:42.766 Atomic Boundary Size (Normal): 0 00:19:42.766 Atomic Boundary Size (PFail): 0 00:19:42.766 Atomic Boundary Offset: 0 00:19:42.766 Maximum Single Source Range Length: 65535 00:19:42.766 Maximum Copy Length: 65535 00:19:42.766 Maximum Source Range Count: 1 00:19:42.766 NGUID/EUI64 Never Reused: No 00:19:42.766 Namespace Write Protected: No 00:19:42.766 Number of LBA Formats: 1 00:19:42.766 Current LBA Format: LBA Format #00 00:19:42.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:42.766 00:19:42.766
[2024-12-09 09:37:18.016742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:42.765 [2024-12-09 09:37:18.024642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:42.765 [2024-12-09 09:37:18.024668] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:42.765 [2024-12-09 09:37:18.024675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.765 [2024-12-09 09:37:18.024679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.765 [2024-12-09 09:37:18.024684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.765 [2024-12-09 09:37:18.024688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:42.765 [2024-12-09 09:37:18.024726] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:42.765 [2024-12-09 09:37:18.024735] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:42.765 [2024-12-09 09:37:18.025730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:42.765 [2024-12-09 09:37:18.025766] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:42.765 [2024-12-09 09:37:18.025771] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:42.765 [2024-12-09 09:37:18.026731] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:42.765 [2024-12-09 09:37:18.026739] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:42.765 [2024-12-09 09:37:18.026781] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:42.765 [2024-12-09 09:37:18.027746] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:42.765
09:37:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:43.026 [2024-12-09 09:37:18.217030] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:48.311 Initializing NVMe Controllers 00:19:48.311
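The spdk_nvme_perf run started above drives 4096-byte sequential reads (-o 4096 -w read) at queue depth 128 (-q 128) for 5 seconds (-t 5) from core 1 only (-c 0x2 core mask); -s 256 sizes the DPDK hugepage allocation in MB, and -g is read here as the single-hugetlbfs-segment option, which is an assumption. A random-read variant would only swap the workload string, e.g.:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w randread -t 5 -c 0x2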
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:48.311 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:48.311 Initialization complete. Launching workers. 00:19:48.311 ======================================================== 00:19:48.311 Latency(us) 00:19:48.311 Device Information : IOPS MiB/s Average min max 00:19:48.311 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40020.80 156.33 3200.72 864.79 6840.31 00:19:48.311 ======================================================== 00:19:48.311 Total : 40020.80 156.33 3200.72 864.79 6840.31 00:19:48.311 00:19:48.311 [2024-12-09 09:37:23.323830] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:48.311 09:37:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:48.311 [2024-12-09 09:37:23.515435] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:53.592 Initializing NVMe Controllers 00:19:53.592 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:53.592 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:53.592 Initialization complete. Launching workers. 00:19:53.592 ======================================================== 00:19:53.592 Latency(us) 00:19:53.592 Device Information : IOPS MiB/s Average min max 00:19:53.592 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39984.20 156.19 3201.15 865.23 7726.00 00:19:53.592 ======================================================== 00:19:53.592 Total : 39984.20 156.19 3201.15 865.23 7726.00 00:19:53.592 00:19:53.592 [2024-12-09 09:37:28.532358] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:53.592 09:37:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:53.592 [2024-12-09 09:37:28.735520] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:58.915 [2024-12-09 09:37:33.883727] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:58.915 Initializing NVMe Controllers 00:19:58.915 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:58.915 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:58.915 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:58.915 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:58.915 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:58.915 Initialization complete. Launching workers. 
00:19:58.915 Starting thread on core 2 00:19:58.915 Starting thread on core 3 00:19:58.915 Starting thread on core 1 00:19:58.916 09:37:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:58.916 [2024-12-09 09:37:34.130017] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:02.212 [2024-12-09 09:37:37.179002] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:02.212 Initializing NVMe Controllers 00:20:02.212 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:02.212 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:02.212 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:02.212 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:02.212 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:02.212 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:02.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:02.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:02.212 Initialization complete. Launching workers. 00:20:02.212 Starting thread on core 1 with urgent priority queue 00:20:02.212 Starting thread on core 2 with urgent priority queue 00:20:02.212 Starting thread on core 3 with urgent priority queue 00:20:02.212 Starting thread on core 0 with urgent priority queue 00:20:02.212 SPDK bdev Controller (SPDK2 ) core 0: 15695.33 IO/s 6.37 secs/100000 ios 00:20:02.212 SPDK bdev Controller (SPDK2 ) core 1: 11170.67 IO/s 8.95 secs/100000 ios 00:20:02.212 SPDK bdev Controller (SPDK2 ) core 2: 7933.33 IO/s 12.61 secs/100000 ios 00:20:02.212 SPDK bdev Controller (SPDK2 ) core 3: 16909.67 IO/s 5.91 secs/100000 ios 00:20:02.212 ======================================================== 00:20:02.212 00:20:02.212 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:02.212 [2024-12-09 09:37:37.419335] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:02.212 Initializing NVMe Controllers 00:20:02.212 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:02.212 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:02.212 Namespace ID: 1 size: 0GB 00:20:02.212 Initialization complete. 00:20:02.212 INFO: using host memory buffer for IO 00:20:02.212 Hello world! 
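hello_world, like the perf and reconnect runs before it, locates the controller purely through the -r transport ID string: trtype selects VFIOUSER, traddr points at the socket directory of the emulated device, and subnqn names the subsystem. A hypothetical variation aiming the same smoke test at the first subsystem (cnode1 behind /var/run/vfio-user/domain/vfio-user1/1, which the nvmf_get_subsystems dump later in this log confirms exists):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'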
00:20:02.212 [2024-12-09 09:37:37.429394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:02.212 09:37:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:02.212 [2024-12-09 09:37:37.657575] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.599 Initializing NVMe Controllers 00:20:03.599 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.599 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:03.599 Initialization complete. Launching workers. 00:20:03.599 submit (in ns) avg, min, max = 6366.5, 2832.5, 3998485.0 00:20:03.599 complete (in ns) avg, min, max = 18031.0, 1640.0, 4995104.2 00:20:03.599 00:20:03.599 Submit histogram 00:20:03.599 ================ 00:20:03.599 Range in us Cumulative Count 00:20:03.599 2.827 - 2.840: 0.3089% ( 62) 00:20:03.599 2.840 - 2.853: 1.4945% ( 238) 00:20:03.599 2.853 - 2.867: 4.1397% ( 531) 00:20:03.599 2.867 - 2.880: 8.9021% ( 956) 00:20:03.599 2.880 - 2.893: 14.0231% ( 1028) 00:20:03.599 2.893 - 2.907: 19.0445% ( 1008) 00:20:03.599 2.907 - 2.920: 24.3947% ( 1074) 00:20:03.599 2.920 - 2.933: 29.4361% ( 1012) 00:20:03.599 2.933 - 2.947: 35.0105% ( 1119) 00:20:03.599 2.947 - 2.960: 40.3158% ( 1065) 00:20:03.599 2.960 - 2.973: 45.9151% ( 1124) 00:20:03.599 2.973 - 2.987: 51.2354% ( 1068) 00:20:03.599 2.987 - 3.000: 57.9406% ( 1346) 00:20:03.599 3.000 - 3.013: 66.3296% ( 1684) 00:20:03.599 3.013 - 3.027: 75.0922% ( 1759) 00:20:03.599 3.027 - 3.040: 82.3403% ( 1455) 00:20:03.599 3.040 - 3.053: 88.8612% ( 1309) 00:20:03.599 3.053 - 3.067: 93.7382% ( 979) 00:20:03.599 3.067 - 3.080: 96.7620% ( 607) 00:20:03.599 3.080 - 3.093: 98.3312% ( 315) 00:20:03.599 3.093 - 3.107: 99.0884% ( 152) 00:20:03.599 3.107 - 3.120: 99.3424% ( 51) 00:20:03.599 3.120 - 3.133: 99.4769% ( 27) 00:20:03.599 3.133 - 3.147: 99.5068% ( 6) 00:20:03.599 3.147 - 3.160: 99.5218% ( 3) 00:20:03.599 3.160 - 3.173: 99.5268% ( 1) 00:20:03.599 3.187 - 3.200: 99.5317% ( 1) 00:20:03.599 3.293 - 3.307: 99.5367% ( 1) 00:20:03.599 3.333 - 3.347: 99.5417% ( 1) 00:20:03.599 3.493 - 3.520: 99.5467% ( 1) 00:20:03.599 3.547 - 3.573: 99.5517% ( 1) 00:20:03.599 3.627 - 3.653: 99.5566% ( 1) 00:20:03.599 3.920 - 3.947: 99.5616% ( 1) 00:20:03.599 4.160 - 4.187: 99.5666% ( 1) 00:20:03.599 4.400 - 4.427: 99.5716% ( 1) 00:20:03.599 4.453 - 4.480: 99.5815% ( 2) 00:20:03.599 4.560 - 4.587: 99.5915% ( 2) 00:20:03.599 4.613 - 4.640: 99.5965% ( 1) 00:20:03.599 4.667 - 4.693: 99.6114% ( 3) 00:20:03.599 4.693 - 4.720: 99.6214% ( 2) 00:20:03.599 4.720 - 4.747: 99.6264% ( 1) 00:20:03.599 4.827 - 4.853: 99.6314% ( 1) 00:20:03.599 4.853 - 4.880: 99.6413% ( 2) 00:20:03.599 4.880 - 4.907: 99.6463% ( 1) 00:20:03.599 4.933 - 4.960: 99.6563% ( 2) 00:20:03.599 4.960 - 4.987: 99.6613% ( 1) 00:20:03.599 5.013 - 5.040: 99.6662% ( 1) 00:20:03.599 5.040 - 5.067: 99.6762% ( 2) 00:20:03.599 5.067 - 5.093: 99.6911% ( 3) 00:20:03.599 5.093 - 5.120: 99.7011% ( 2) 00:20:03.599 5.120 - 5.147: 99.7111% ( 2) 00:20:03.599 5.147 - 5.173: 99.7210% ( 2) 00:20:03.599 5.173 - 5.200: 99.7310% ( 2) 00:20:03.599 5.227 - 5.253: 99.7360% ( 1) 00:20:03.599 5.253 - 5.280: 99.7459% ( 2) 00:20:03.599 5.307 - 5.333: 99.7559% ( 2) 00:20:03.599 5.333 - 5.360: 99.7609% ( 1) 00:20:03.599 5.387 - 5.413: 
99.7659% ( 1) 00:20:03.599 5.440 - 5.467: 99.7708% ( 1) 00:20:03.599 5.493 - 5.520: 99.7758% ( 1) 00:20:03.599 5.520 - 5.547: 99.7808% ( 1) 00:20:03.599 5.547 - 5.573: 99.7858% ( 1) 00:20:03.599 5.600 - 5.627: 99.7958% ( 2) 00:20:03.599 5.733 - 5.760: 99.8007% ( 1) 00:20:03.599 5.813 - 5.840: 99.8057% ( 1) 00:20:03.599 5.840 - 5.867: 99.8157% ( 2) 00:20:03.599 5.867 - 5.893: 99.8207% ( 1) 00:20:03.599 5.893 - 5.920: 99.8256% ( 1) 00:20:03.599 5.947 - 5.973: 99.8306% ( 1) 00:20:03.599 6.027 - 6.053: 99.8356% ( 1) 00:20:03.599 6.133 - 6.160: 99.8406% ( 1) 00:20:03.599 6.160 - 6.187: 99.8456% ( 1) 00:20:03.599 6.187 - 6.213: 99.8605% ( 3) 00:20:03.599 6.267 - 6.293: 99.8655% ( 1) 00:20:03.599 6.293 - 6.320: 99.8705% ( 1) 00:20:03.599 6.320 - 6.347: 99.8755% ( 1) 00:20:03.599 6.453 - 6.480: 99.8904% ( 3) 00:20:03.599 6.587 - 6.613: 99.8954% ( 1) 00:20:03.599 6.987 - 7.040: 99.9004% ( 1) 00:20:03.599 7.627 - 7.680: 99.9054% ( 1) 00:20:03.599 9.973 - 10.027: 99.9103% ( 1) 00:20:03.599 12.747 - 12.800: 99.9153% ( 1) 00:20:03.599 3986.773 - 4014.080: 100.0000% ( 17) 00:20:03.599 00:20:03.599 Complete histogram 00:20:03.599 ================== 00:20:03.599 Range in us Cumulative Count 00:20:03.599 1.640 - 1.647: 0.9864% ( 198) 00:20:03.599 1.647 - 1.653: 1.3500% ( 73) 00:20:03.599 1.653 - 1.660: 1.3749% ( 5) 00:20:03.599 1.660 - 1.667: 1.6489% ( 55) 00:20:03.599 1.667 - 1.673: 1.7186% ( 14) 00:20:03.599 1.673 - 1.680: 1.7685% ( 10) 00:20:03.599 1.680 - 1.687: 1.8382% ( 14) 00:20:03.599 1.687 - 1.693: 38.4627% ( 7352) 00:20:03.599 1.693 - 1.700: 46.2389% ( 1561) 00:20:03.599 1.700 - 1.707: 50.8170% ( 919) 00:20:03.599 1.707 - 1.720: 78.3053% ( 5518) 00:20:03.599 1.720 - 1.733: 84.6020% ( 1264) 00:20:03.599 1.733 - 1.747: 85.7826% ( 237) 00:20:03.599 1.747 - 1.760: 88.8363% ( 613) 00:20:03.599 1.760 - 1.773: 93.6684% ( 970) 00:20:03.599 1.773 - 1.787: 97.0459% ( 678) 00:20:03.599 1.787 - 1.800: 98.9389% ( 380) 00:20:03.599 1.800 - 1.813: 99.4172% ( 96) 00:20:03.599 1.813 - 1.827: 99.4819% ( 13) 00:20:03.599 1.853 - 1.867: 99.4869% ( 1) 00:20:03.599 1.867 - 1.880: 99.4969% ( 2) 00:20:03.599 3.467 - 3.493: 99.5018% ( 1) 00:20:03.600 3.573 - 3.600: 99.5068% ( 1) 00:20:03.600 3.600 - 3.627: 99.5118% ( 1) 00:20:03.600 3.627 - 3.653: 99.5168% ( 1) 00:20:03.600 3.787 - 3.813: 99.5218% ( 1) 00:20:03.600 4.107 - 4.133: 99.5317% ( 2) 00:20:03.600 4.400 - 4.427: 99.5367% ( 1) 00:20:03.600 4.507 - 4.533: 99.5417% ( 1) 00:20:03.600 4.773 - 4.800: 99.5467% ( 1) 00:20:03.600 4.853 - 4.880: 99.5517% ( 1) 00:20:03.600 4.960 - 4.987: 99.5566% ( 1) 00:20:03.600 4.987 - 5.013: 99.5616% ( 1) 00:20:03.600 5.067 - 5.093: 99.5666% ( 1) 00:20:03.600 5.227 - 5.253: 99.5716% ( 1) 00:20:03.600 5.840 - 5.867: 99.5766% ( 1) 00:20:03.600 6.107 - 6.133: 99.5815% ( 1) 00:20:03.600 7.467 - 7.520: 99.5865% ( 1) 00:20:03.600 8.053 - 8.107: 99.5915% ( 1) 00:20:03.600 2990.080 - 3003.733: 99.5965% ( 1) 00:20:03.600 3017.387 - 3031.040: 99.6015% ( 1) 00:20:03.600 3986.773 - 4014.080: 99.9900% ( 78) 00:20:03.600 4969.813 - 4997.120: 100.0000% ( 2) 00:20:03.600 00:20:03.600 [2024-12-09 09:37:38.755241] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:03.600 [ 00:20:03.600 { 00:20:03.600 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:03.600 "subtype": "Discovery", 00:20:03.600 "listen_addresses": [], 00:20:03.600 "allow_any_host": true, 00:20:03.600 "hosts": [] 00:20:03.600 }, 00:20:03.600 { 00:20:03.600 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:03.600 "subtype": "NVMe", 00:20:03.600 "listen_addresses": [ 00:20:03.600 { 00:20:03.600 "trtype": "VFIOUSER", 00:20:03.600 "adrfam": "IPv4", 00:20:03.600 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:03.600 "trsvcid": "0" 00:20:03.600 } 00:20:03.600 ], 00:20:03.600 "allow_any_host": true, 00:20:03.600 "hosts": [], 00:20:03.600 "serial_number": "SPDK1", 00:20:03.600 "model_number": "SPDK bdev Controller", 00:20:03.600 "max_namespaces": 32, 00:20:03.600 "min_cntlid": 1, 00:20:03.600 "max_cntlid": 65519, 00:20:03.600 "namespaces": [ 00:20:03.600 { 00:20:03.600 "nsid": 1, 00:20:03.600 "bdev_name": "Malloc1", 00:20:03.600 "name": "Malloc1", 00:20:03.600 "nguid": "99CB4A51689246769C4B17097C200C53", 00:20:03.600 "uuid": "99cb4a51-6892-4676-9c4b-17097c200c53" 00:20:03.600 }, 00:20:03.600 { 00:20:03.600 "nsid": 2, 00:20:03.600 "bdev_name": "Malloc3", 00:20:03.600 "name": "Malloc3", 00:20:03.600 "nguid": "6A10836A7CE048258FBF7313C671276F", 00:20:03.600 "uuid": "6a10836a-7ce0-4825-8fbf-7313c671276f" 00:20:03.600 } 00:20:03.600 ] 00:20:03.600 }, 00:20:03.600 { 00:20:03.600 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:03.600 "subtype": "NVMe", 00:20:03.600 "listen_addresses": [ 00:20:03.600 { 00:20:03.600 "trtype": "VFIOUSER", 00:20:03.600 "adrfam": "IPv4", 00:20:03.600 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:03.600 "trsvcid": "0" 00:20:03.600 } 00:20:03.600 ], 00:20:03.600 "allow_any_host": true, 00:20:03.600 "hosts": [], 00:20:03.600 "serial_number": "SPDK2", 00:20:03.600 "model_number": "SPDK bdev Controller", 00:20:03.600 "max_namespaces": 32, 00:20:03.600 "min_cntlid": 1, 00:20:03.600 "max_cntlid": 65519, 00:20:03.600 "namespaces": [ 00:20:03.600 { 00:20:03.600 "nsid": 1, 00:20:03.600 "bdev_name": "Malloc2", 00:20:03.600 "name": "Malloc2", 00:20:03.600 "nguid": "35C07BCD0B754F3D84E3E0DB81C64438", 00:20:03.600 "uuid": "35c07bcd-0b75-4f3d-84e3-e0db81c64438" 00:20:03.600 } 00:20:03.600 ] 00:20:03.600 } 00:20:03.600 ] 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2778782 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:03.600 09:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:03.600 09:37:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:03.862 [2024-12-09 09:37:39.137077] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:03.862 Malloc4 00:20:03.862 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:04.123 [2024-12-09 09:37:39.317297] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:04.123 Asynchronous Event Request test 00:20:04.123 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:04.123 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:04.123 Registering asynchronous event callbacks... 00:20:04.123 Starting namespace attribute notice tests for all controllers... 00:20:04.123 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:04.123 aer_cb - Changed Namespace 00:20:04.123 Cleaning up... 
00:20:04.123 [ 00:20:04.123 { 00:20:04.123 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:04.123 "subtype": "Discovery", 00:20:04.123 "listen_addresses": [], 00:20:04.123 "allow_any_host": true, 00:20:04.123 "hosts": [] 00:20:04.123 }, 00:20:04.123 { 00:20:04.123 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:04.123 "subtype": "NVMe", 00:20:04.123 "listen_addresses": [ 00:20:04.123 { 00:20:04.123 "trtype": "VFIOUSER", 00:20:04.123 "adrfam": "IPv4", 00:20:04.123 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:04.123 "trsvcid": "0" 00:20:04.123 } 00:20:04.123 ], 00:20:04.123 "allow_any_host": true, 00:20:04.123 "hosts": [], 00:20:04.123 "serial_number": "SPDK1", 00:20:04.123 "model_number": "SPDK bdev Controller", 00:20:04.123 "max_namespaces": 32, 00:20:04.123 "min_cntlid": 1, 00:20:04.123 "max_cntlid": 65519, 00:20:04.123 "namespaces": [ 00:20:04.123 { 00:20:04.123 "nsid": 1, 00:20:04.123 "bdev_name": "Malloc1", 00:20:04.123 "name": "Malloc1", 00:20:04.123 "nguid": "99CB4A51689246769C4B17097C200C53", 00:20:04.123 "uuid": "99cb4a51-6892-4676-9c4b-17097c200c53" 00:20:04.123 }, 00:20:04.123 { 00:20:04.123 "nsid": 2, 00:20:04.123 "bdev_name": "Malloc3", 00:20:04.123 "name": "Malloc3", 00:20:04.123 "nguid": "6A10836A7CE048258FBF7313C671276F", 00:20:04.123 "uuid": "6a10836a-7ce0-4825-8fbf-7313c671276f" 00:20:04.123 } 00:20:04.123 ] 00:20:04.123 }, 00:20:04.123 { 00:20:04.123 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:04.123 "subtype": "NVMe", 00:20:04.123 "listen_addresses": [ 00:20:04.123 { 00:20:04.123 "trtype": "VFIOUSER", 00:20:04.123 "adrfam": "IPv4", 00:20:04.123 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:04.123 "trsvcid": "0" 00:20:04.123 } 00:20:04.123 ], 00:20:04.123 "allow_any_host": true, 00:20:04.123 "hosts": [], 00:20:04.123 "serial_number": "SPDK2", 00:20:04.123 "model_number": "SPDK bdev Controller", 00:20:04.123 "max_namespaces": 32, 00:20:04.123 "min_cntlid": 1, 00:20:04.123 "max_cntlid": 65519, 00:20:04.123 "namespaces": [ 00:20:04.123 { 00:20:04.123 "nsid": 1, 00:20:04.123 "bdev_name": "Malloc2", 00:20:04.123 "name": "Malloc2", 00:20:04.123 "nguid": "35C07BCD0B754F3D84E3E0DB81C64438", 00:20:04.123 "uuid": "35c07bcd-0b75-4f3d-84e3-e0db81c64438" 00:20:04.123 }, 00:20:04.123 { 00:20:04.123 "nsid": 2, 00:20:04.123 "bdev_name": "Malloc4", 00:20:04.123 "name": "Malloc4", 00:20:04.123 "nguid": "2B697F2D5BE8448F8A8CEEC1E8543ABF", 00:20:04.123 "uuid": "2b697f2d-5be8-448f-8a8c-eec1e8543abf" 00:20:04.123 } 00:20:04.123 ] 00:20:04.123 } 00:20:04.123 ] 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2778782 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2769717 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2769717 ']' 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2769717 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.123 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2769717 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2769717' 00:20:04.385 killing process with pid 2769717 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2769717 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2769717 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2778803 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2778803' 00:20:04.385 Process pid: 2778803 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2778803 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2778803 ']' 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:04.385 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:04.385 [2024-12-09 09:37:39.792218] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:04.385 [2024-12-09 09:37:39.793124] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:20:04.385 [2024-12-09 09:37:39.793165] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.647 [2024-12-09 09:37:39.877427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.648 [2024-12-09 09:37:39.892904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.648 [2024-12-09 09:37:39.892933] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.648 [2024-12-09 09:37:39.892939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.648 [2024-12-09 09:37:39.892944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.648 [2024-12-09 09:37:39.892948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.648 [2024-12-09 09:37:39.894375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.648 [2024-12-09 09:37:39.894490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.648 [2024-12-09 09:37:39.894663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.648 [2024-12-09 09:37:39.894665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.648 [2024-12-09 09:37:39.940973] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:04.648 [2024-12-09 09:37:39.940983] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:04.648 [2024-12-09 09:37:39.941708] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:04.648 [2024-12-09 09:37:39.942856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:04.648 [2024-12-09 09:37:39.942929] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
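With nvmf_tgt relaunched in interrupt mode (--interrupt-mode, cores 0-3 via -m '[0,1,2,3]', all tracepoint groups via -e 0xFFFF), the xtrace below replays the usual two-device vfio-user setup over JSON-RPC. Condensed into a loop, with rpc.py standing in for the full scripts/rpc.py path used in the trace:

rpc.py nvmf_create_transport -t VFIOUSER -M -I
for i in 1 2; do
    # one socket directory, one 64 MiB malloc bdev (512-byte blocks), one subsystem per device
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    rpc.py bdev_malloc_create 64 512 -b Malloc$i
    rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done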
00:20:04.648 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.648 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:04.648 09:37:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:05.591 09:37:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:05.851 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:05.851 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:05.851 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:05.851 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:05.851 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:06.112 Malloc1 00:20:06.112 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:06.371 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:06.371 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:20:06.631 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:06.631 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:06.631 09:37:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:06.892 Malloc2 00:20:06.892 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:06.892 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:07.152 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 2778803 ']' 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2778803' 00:20:07.412 killing process with pid 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2778803 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:07.412 00:20:07.412 real 0m50.285s 00:20:07.412 user 3m15.031s 00:20:07.412 sys 0m2.608s 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.412 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:07.412 ************************************ 00:20:07.412 END TEST nvmf_vfio_user 00:20:07.412 ************************************ 00:20:07.672 09:37:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:07.672 09:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.672 09:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.673 09:37:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.673 ************************************ 00:20:07.673 START TEST nvmf_vfio_user_nvme_compliance 00:20:07.673 ************************************ 00:20:07.673 09:37:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:07.673 * Looking for test storage... 
00:20:07.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.673 --rc genhtml_branch_coverage=1 00:20:07.673 --rc genhtml_function_coverage=1 00:20:07.673 --rc genhtml_legend=1 00:20:07.673 --rc geninfo_all_blocks=1 00:20:07.673 --rc geninfo_unexecuted_blocks=1 00:20:07.673 00:20:07.673 ' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.673 --rc genhtml_branch_coverage=1 00:20:07.673 --rc genhtml_function_coverage=1 00:20:07.673 --rc genhtml_legend=1 00:20:07.673 --rc geninfo_all_blocks=1 00:20:07.673 --rc geninfo_unexecuted_blocks=1 00:20:07.673 00:20:07.673 ' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.673 --rc genhtml_branch_coverage=1 00:20:07.673 --rc genhtml_function_coverage=1 00:20:07.673 --rc genhtml_legend=1 00:20:07.673 --rc geninfo_all_blocks=1 00:20:07.673 --rc geninfo_unexecuted_blocks=1 00:20:07.673 00:20:07.673 ' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:07.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.673 --rc genhtml_branch_coverage=1 00:20:07.673 --rc genhtml_function_coverage=1 00:20:07.673 --rc genhtml_legend=1 00:20:07.673 --rc geninfo_all_blocks=1 00:20:07.673 --rc 
geninfo_unexecuted_blocks=1 00:20:07.673 00:20:07.673 ' 00:20:07.673 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2779551 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2779551' 00:20:07.934 Process pid: 2779551 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2779551 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2779551 ']' 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.934 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:07.934 [2024-12-09 09:37:43.215833] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:20:07.934 [2024-12-09 09:37:43.215915] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.934 [2024-12-09 09:37:43.302196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.934 [2024-12-09 09:37:43.321048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.934 [2024-12-09 09:37:43.321088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:07.934 [2024-12-09 09:37:43.321094] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.934 [2024-12-09 09:37:43.321099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.934 [2024-12-09 09:37:43.321104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.934 [2024-12-09 09:37:43.322388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.934 [2024-12-09 09:37:43.322509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.934 [2024-12-09 09:37:43.322510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.194 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.194 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:08.194 09:37:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 malloc0 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:09.134 09:37:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.134 09:37:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:09.395 00:20:09.395 00:20:09.395 CUnit - A unit testing framework for C - Version 2.1-3 00:20:09.395 http://cunit.sourceforge.net/ 00:20:09.395 00:20:09.395 00:20:09.395 Suite: nvme_compliance 00:20:09.395 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 09:37:44.647003] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.395 [2024-12-09 09:37:44.648292] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:09.395 [2024-12-09 09:37:44.648303] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:09.395 [2024-12-09 09:37:44.648308] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:09.395 [2024-12-09 09:37:44.650022] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.395 passed 00:20:09.395 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 09:37:44.730541] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.395 [2024-12-09 09:37:44.733559] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.395 passed 00:20:09.395 Test: admin_identify_ns ...[2024-12-09 09:37:44.813523] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.657 [2024-12-09 09:37:44.873648] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:09.657 [2024-12-09 09:37:44.881647] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:09.657 [2024-12-09 09:37:44.902730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:20:09.657 passed 00:20:09.657 Test: admin_get_features_mandatory_features ...[2024-12-09 09:37:44.976945] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.657 [2024-12-09 09:37:44.979961] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.657 passed 00:20:09.657 Test: admin_get_features_optional_features ...[2024-12-09 09:37:45.055425] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.657 [2024-12-09 09:37:45.058439] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.657 passed 00:20:09.923 Test: admin_set_features_number_of_queues ...[2024-12-09 09:37:45.134709] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.923 [2024-12-09 09:37:45.241733] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.923 passed 00:20:09.923 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 09:37:45.314941] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:09.923 [2024-12-09 09:37:45.317966] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:09.923 passed 00:20:10.182 Test: admin_get_log_page_with_lpo ...[2024-12-09 09:37:45.391703] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.182 [2024-12-09 09:37:45.460648] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:10.182 [2024-12-09 09:37:45.473687] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.182 passed 00:20:10.182 Test: fabric_property_get ...[2024-12-09 09:37:45.547921] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.182 [2024-12-09 09:37:45.549123] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:10.183 [2024-12-09 09:37:45.550938] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.183 passed 00:20:10.183 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 09:37:45.626422] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.183 [2024-12-09 09:37:45.627622] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:10.183 [2024-12-09 09:37:45.629440] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.442 passed 00:20:10.442 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 09:37:45.706219] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.442 [2024-12-09 09:37:45.790645] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.443 [2024-12-09 09:37:45.806651] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.443 [2024-12-09 09:37:45.811723] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.443 passed 00:20:10.443 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 09:37:45.884900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.443 [2024-12-09 09:37:45.886097] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:10.443 [2024-12-09 09:37:45.887920] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.702 passed 00:20:10.702 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 09:37:45.965691] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.702 [2024-12-09 09:37:46.045644] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:10.702 [2024-12-09 09:37:46.069642] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:10.702 [2024-12-09 09:37:46.074714] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.702 passed 00:20:10.702 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 09:37:46.148882] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.702 [2024-12-09 09:37:46.150091] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:10.702 [2024-12-09 09:37:46.150110] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:10.702 [2024-12-09 09:37:46.151897] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.962 passed 00:20:10.962 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 09:37:46.229111] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:10.962 [2024-12-09 09:37:46.320645] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:10.962 [2024-12-09 09:37:46.328642] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:10.962 [2024-12-09 09:37:46.336659] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:10.962 [2024-12-09 09:37:46.344640] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:10.962 [2024-12-09 09:37:46.373720] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:10.962 passed 00:20:11.222 Test: admin_create_io_sq_verify_pc ...[2024-12-09 09:37:46.450754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:11.222 [2024-12-09 09:37:46.468648] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:11.222 [2024-12-09 09:37:46.485921] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:11.222 passed 00:20:11.222 Test: admin_create_io_qp_max_qps ...[2024-12-09 09:37:46.562380] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:12.605 [2024-12-09 09:37:47.676646] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:12.865 [2024-12-09 09:37:48.059076] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:12.865 passed 00:20:12.865 Test: admin_create_io_sq_shared_cq ...[2024-12-09 09:37:48.136997] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:12.866 [2024-12-09 09:37:48.268646] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:12.866 [2024-12-09 09:37:48.305692] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:13.126 passed 00:20:13.126 00:20:13.126 Run Summary: Type Total Ran Passed Failed Inactive 00:20:13.126 suites 1 1 n/a 0 0 00:20:13.126 tests 18 18 18 0 0 00:20:13.126 asserts 
360 360 360 0 n/a 00:20:13.126 00:20:13.126 Elapsed time = 1.505 seconds 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2779551 ']' 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2779551' 00:20:13.126 killing process with pid 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2779551 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:13.126 00:20:13.126 real 0m5.605s 00:20:13.126 user 0m15.684s 00:20:13.126 sys 0m0.511s 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:13.126 ************************************ 00:20:13.126 END TEST nvmf_vfio_user_nvme_compliance 00:20:13.126 ************************************ 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.126 09:37:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:13.388 ************************************ 00:20:13.389 START TEST nvmf_vfio_user_fuzz 00:20:13.389 ************************************ 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:13.389 * Looking for test storage... 
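The killprocess call traced above reduces to a check, a signal, and a reap. A loose sketch of that shape (the real helper also inspects the process name via ps, as the trace shows, and special-cases sudo):

# Loose reduction of the killprocess pattern seen throughout this log.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1   # is the process still alive?
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                  # reap it; works when it is our child
}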
00:20:13.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:13.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.389 --rc genhtml_branch_coverage=1 00:20:13.389 --rc genhtml_function_coverage=1 00:20:13.389 --rc genhtml_legend=1 00:20:13.389 --rc geninfo_all_blocks=1 00:20:13.389 --rc geninfo_unexecuted_blocks=1 00:20:13.389 00:20:13.389 ' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:13.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.389 --rc genhtml_branch_coverage=1 00:20:13.389 --rc genhtml_function_coverage=1 00:20:13.389 --rc genhtml_legend=1 00:20:13.389 --rc geninfo_all_blocks=1 00:20:13.389 --rc geninfo_unexecuted_blocks=1 00:20:13.389 00:20:13.389 ' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:13.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.389 --rc genhtml_branch_coverage=1 00:20:13.389 --rc genhtml_function_coverage=1 00:20:13.389 --rc genhtml_legend=1 00:20:13.389 --rc geninfo_all_blocks=1 00:20:13.389 --rc geninfo_unexecuted_blocks=1 00:20:13.389 00:20:13.389 ' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:13.389 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.389 --rc genhtml_branch_coverage=1 00:20:13.389 --rc genhtml_function_coverage=1 00:20:13.389 --rc genhtml_legend=1 00:20:13.389 --rc geninfo_all_blocks=1 00:20:13.389 --rc geninfo_unexecuted_blocks=1 00:20:13.389 00:20:13.389 ' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:13.389 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:20:13.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.390 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2780726 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2780726' 00:20:13.650 Process pid: 2780726 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2780726 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2780726 ']' 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
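Two notes on this stretch. First, the recurring "[: : integer expression expected" message is bash objecting that common.sh line 33 applies -eq to an empty string ('[' '' -eq 1 ']'); the test simply evaluates false and the run continues, so it is noise rather than a failure. Second, the provisioning that follows goes through rpc_cmd instead of invoking rpc.py directly; in these traces it acts as a thin wrapper. A sketch of that reading (the real helper can also route commands to a persistent RPC daemon, omitted here):

# What rpc_cmd amounts to in these traces when no RPC daemon is involved.
rpc_cmd() {
    ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"
}
# e.g. the first two calls traced below:
rpc_cmd nvmf_create_transport -t VFIOUSER
rpc_cmd bdev_malloc_create 64 512 -b malloc0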
00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.650 09:37:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.590 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.590 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:14.590 09:37:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.530 malloc0 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
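With the subsystem listening, the harness hands the whole transport ID to the fuzzer as one quoted argument. An equivalent standalone run (a sketch; every value mirrors the invocation traced below, and the roughly 31-second wall time between the surrounding timestamps matches the -t 30 budget):

# Standalone fuzz run matching the trace below; -N and -a are carried
# over verbatim without glossing their semantics here.
FUZZ=./test/app/fuzz/nvme_fuzz/nvme_fuzz
TRID='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
"$FUZZ" -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a

The summary it prints (opcode histograms, per-queue command counts, derived random_seed values) is what appears immediately after the invocation in the log.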
00:20:15.530 09:37:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:47.622 Fuzzing completed. Shutting down the fuzz application 00:20:47.622 00:20:47.622 Dumping successful admin opcodes: 00:20:47.622 9, 10, 00:20:47.622 Dumping successful io opcodes: 00:20:47.622 0, 00:20:47.622 NS: 0x20000081ef00 I/O qp, Total commands completed: 1328441, total successful commands: 5201, random_seed: 191202944 00:20:47.622 NS: 0x20000081ef00 admin qp, Total commands completed: 301277, total successful commands: 74, random_seed: 399675904 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2780726 ']' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2780726' 00:20:47.623 killing process with pid 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2780726 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:47.623 00:20:47.623 real 0m32.812s 00:20:47.623 user 0m34.701s 00:20:47.623 sys 0m26.450s 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:47.623 ************************************ 
00:20:47.623 END TEST nvmf_vfio_user_fuzz 00:20:47.623 ************************************ 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:47.623 ************************************ 00:20:47.623 START TEST nvmf_auth_target 00:20:47.623 ************************************ 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:47.623 * Looking for test storage... 00:20:47.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:47.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.623 --rc genhtml_branch_coverage=1 00:20:47.623 --rc genhtml_function_coverage=1 00:20:47.623 --rc genhtml_legend=1 00:20:47.623 --rc geninfo_all_blocks=1 00:20:47.623 --rc geninfo_unexecuted_blocks=1 00:20:47.623 00:20:47.623 ' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:47.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.623 --rc genhtml_branch_coverage=1 00:20:47.623 --rc genhtml_function_coverage=1 00:20:47.623 --rc genhtml_legend=1 00:20:47.623 --rc geninfo_all_blocks=1 00:20:47.623 --rc geninfo_unexecuted_blocks=1 00:20:47.623 00:20:47.623 ' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:47.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.623 --rc genhtml_branch_coverage=1 00:20:47.623 --rc genhtml_function_coverage=1 00:20:47.623 --rc genhtml_legend=1 00:20:47.623 --rc geninfo_all_blocks=1 00:20:47.623 --rc geninfo_unexecuted_blocks=1 00:20:47.623 00:20:47.623 ' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:47.623 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:47.623 --rc genhtml_branch_coverage=1 00:20:47.623 --rc genhtml_function_coverage=1 00:20:47.623 --rc genhtml_legend=1 00:20:47.623 --rc geninfo_all_blocks=1 00:20:47.623 --rc geninfo_unexecuted_blocks=1 00:20:47.623 00:20:47.623 ' 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:47.623 09:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:47.623 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:47.624 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:47.624 09:38:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:54.211 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:54.212 
09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:54.212 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.212 09:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:54.212 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:54.212 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:54.212 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:54.212 09:38:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:54.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:54.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:20:54.212 00:20:54.212 --- 10.0.0.2 ping statistics --- 00:20:54.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.212 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:20:54.212 09:38:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:54.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:54.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:20:54.212 00:20:54.212 --- 10.0.0.1 ping statistics --- 00:20:54.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:54.212 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:20:54.212 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:54.212 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:54.212 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:54.212 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2790759 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2790759 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2790759 ']' 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
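The namespace plumbing above (ip netns add cvl_0_0_ns_spdk, moving one e810 port into it, the iptables ACCEPT for port 4420, then the two ping checks) gives the target an isolated 10.0.0.2 endpoint while the initiator keeps 10.0.0.1; the target application is then launched inside the namespace, which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD above. A minimal sketch of the same topology using a veth pair instead of the log's two physical cvl_0_0/cvl_0_1 ports (interface and namespace names here are illustrative, not from the script; run as root):

    ip netns add nvmf_tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns nvmf_tgt_ns
    ip addr add 10.0.0.1/24 dev veth_ini
    ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec nvmf_tgt_ns ip link set veth_tgt up
    ip netns exec nvmf_tgt_ns ip link set lo up
    iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT   # mirror the log's firewall opening
    ping -c 1 10.0.0.2    # initiator -> target, as in the log's first ping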
00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.213 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2790950 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:20:54.786 09:38:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:54.786 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:54.786 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=300446fb6b85f087d5fae71d0b9ae575c2f30487dbef453e 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.76K 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 300446fb6b85f087d5fae71d0b9ae575c2f30487dbef453e 0 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 300446fb6b85f087d5fae71d0b9ae575c2f30487dbef453e 0 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=300446fb6b85f087d5fae71d0b9ae575c2f30487dbef453e 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
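gen_dhchap_key reads len/2 random bytes and keeps them as a hex string, so "null 48" above yields 24 bytes from /dev/urandom printed as 48 hex characters into a mode-0600 temp file. The raw-material step in isolation, as a sketch of what the trace above shows:

    digest=null; len=48
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # 48 hex chars, as in the log
    file=$(mktemp -t "spdk.key-$digest.XXX")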
00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.76K 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.76K 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.76K 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a99d791d80015e66ad1166066bcc6730459c9b2f9819030115ed9911c248b17d 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.onq 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a99d791d80015e66ad1166066bcc6730459c9b2f9819030115ed9911c248b17d 3 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a99d791d80015e66ad1166066bcc6730459c9b2f9819030115ed9911c248b17d 3 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a99d791d80015e66ad1166066bcc6730459c9b2f9819030115ed9911c248b17d 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.onq 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.onq 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.onq 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
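Each `python -` step above serializes the hex string into the standard DHHC-1 secret representation: the DHHC-1 prefix, a 2-digit hash identifier (00 = unhashed, 01/02/03 = SHA-256/384/512, matching the digests table in the trace), and a base64 body holding the ASCII key plus a 4-byte CRC32, with a trailing colon. The exact script fed to `python -` is not shown in the log, so the following is an assumed sketch of that encoding; its base64 body visibly begins with the ASCII of key0's hex string in the DHHC-1:00: secret that appears later in the log:

    file=$(mktemp -t spdk.key-null.XXX)
    python3 - > "$file" <<'EOF'
    import base64, zlib
    key = b"300446fb6b85f087d5fae71d0b9ae575c2f30487dbef453e"  # key0 material from the log
    crc = zlib.crc32(key).to_bytes(4, "little")                # assumed little-endian CRC32 suffix
    print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
    EOF
    chmod 0600 "$file"    # secrets are kept mode 0600, as in the trace above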
00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c79a4259d6e9cf946d81b7f9f912cc3d 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9BV 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c79a4259d6e9cf946d81b7f9f912cc3d 1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c79a4259d6e9cf946d81b7f9f912cc3d 1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c79a4259d6e9cf946d81b7f9f912cc3d 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9BV 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9BV 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.9BV 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a0318987e7b437ed0fba00c279fcc58ffea1b4c4f8ab0f98 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0rU 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a0318987e7b437ed0fba00c279fcc58ffea1b4c4f8ab0f98 2 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a0318987e7b437ed0fba00c279fcc58ffea1b4c4f8ab0f98 2 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:54.787 09:38:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a0318987e7b437ed0fba00c279fcc58ffea1b4c4f8ab0f98 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:54.787 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0rU 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0rU 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.0rU 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=af459a37ec8b81e79e26612934acba82b42a94700815bfec 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.YPd 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key af459a37ec8b81e79e26612934acba82b42a94700815bfec 2 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 af459a37ec8b81e79e26612934acba82b42a94700815bfec 2 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:55.049 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=af459a37ec8b81e79e26612934acba82b42a94700815bfec 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.YPd 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.YPd 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.YPd 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
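Each keys[i] entry is paired with a ckeys[i] controller key (used later as --dhchap-ctrlr-key) so bidirectional authentication can be exercised, as seen with keys[0]/ckeys[0] and keys[1]/ckeys[1] above. The digest name maps to the 2-digit id through the digests table in the trace, and the byte count read from /dev/urandom is always half the requested hex length. A sketch of that mapping, assuming gen_dhchap_key keeps the signature shown above:

    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    for spec in "null 48" "sha256 32" "sha384 48" "sha512 64"; do
        set -- $spec
        printf 'digest %-6s -> id %d, %d random bytes\n' "$1" "${digests[$1]}" "$(($2 / 2))"
    done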
00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d55ca5472a7468835f2f561a254cf691 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Num 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d55ca5472a7468835f2f561a254cf691 1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d55ca5472a7468835f2f561a254cf691 1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d55ca5472a7468835f2f561a254cf691 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Num 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Num 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Num 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52a25115bb258dc1e8934f3065a5879c0a43e2d3c75a3b7f4eff81179056862a 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bEy 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 52a25115bb258dc1e8934f3065a5879c0a43e2d3c75a3b7f4eff81179056862a 3 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 52a25115bb258dc1e8934f3065a5879c0a43e2d3c75a3b7f4eff81179056862a 3 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52a25115bb258dc1e8934f3065a5879c0a43e2d3c75a3b7f4eff81179056862a 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bEy 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bEy 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bEy 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2790759 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2790759 ']' 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.050 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2790950 /var/tmp/host.sock 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2790950 ']' 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:55.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
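Two SPDK processes are now up: the target (nvmf_tgt, pid 2790759, default RPC socket /var/tmp/spdk.sock, reached via rpc_cmd) and a host-side spdk_tgt (pid 2790950, RPC socket /var/tmp/host.sock, reached via the hostrpc wrapper). Every key file is therefore registered twice in the steps below, once per keyring. A sketch of that pattern with the first key, with rpc.py shortened to its repo-relative path:

    scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.76K                        # target side
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.76K  # host side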
00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.311 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.572 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.76K 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.76K 00:20:55.573 09:38:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.76K 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.onq ]] 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.onq 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.onq 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.onq 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9BV 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.864 09:38:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9BV 00:20:55.864 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.9BV 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.0rU ]] 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0rU 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0rU 00:20:56.152 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0rU 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YPd 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.YPd 00:20:56.464 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.YPd 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Num ]] 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Num 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Num 00:20:56.731 09:38:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Num 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:56.731 09:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bEy 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bEy 00:20:56.731 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bEy 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:56.993 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.255 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.255 
09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.517 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.517 { 00:20:57.517 "cntlid": 1, 00:20:57.517 "qid": 0, 00:20:57.517 "state": "enabled", 00:20:57.517 "thread": "nvmf_tgt_poll_group_000", 00:20:57.517 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:57.517 "listen_address": { 00:20:57.517 "trtype": "TCP", 00:20:57.517 "adrfam": "IPv4", 00:20:57.517 "traddr": "10.0.0.2", 00:20:57.517 "trsvcid": "4420" 00:20:57.517 }, 00:20:57.517 "peer_address": { 00:20:57.517 "trtype": "TCP", 00:20:57.517 "adrfam": "IPv4", 00:20:57.517 "traddr": "10.0.0.1", 00:20:57.517 "trsvcid": "38024" 00:20:57.517 }, 00:20:57.517 "auth": { 00:20:57.517 "state": "completed", 00:20:57.517 "digest": "sha256", 00:20:57.517 "dhgroup": "null" 00:20:57.517 } 00:20:57.517 } 00:20:57.517 ]' 00:20:57.517 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.778 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.778 09:38:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.778 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:57.778 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.778 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.778 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.778 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.039 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:20:58.039 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:20:58.609 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.609 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:58.609 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.609 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.610 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.610 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.610 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:58.610 09:38:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.870 09:38:34 
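Each cycle is verified by dumping the subsystem's queue pairs and filtering the auth block with jq, as in the JSON above. The checks, condensed from the trace (same subsystem and sockets as before):

  # Fetch the active qpairs for the subsystem from the target-side RPC socket.
  qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The harness asserts on three fields of the first qpair's auth block.
  jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha256
  jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: null, or the FFDHE group under test
  jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed
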
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.870 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.131 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.131 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.131 { 00:20:59.131 "cntlid": 3, 00:20:59.131 "qid": 0, 00:20:59.131 "state": "enabled", 00:20:59.131 "thread": "nvmf_tgt_poll_group_000", 00:20:59.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:59.131 "listen_address": { 00:20:59.131 "trtype": "TCP", 00:20:59.131 "adrfam": "IPv4", 00:20:59.131 "traddr": "10.0.0.2", 00:20:59.131 "trsvcid": "4420" 00:20:59.131 }, 00:20:59.131 "peer_address": { 00:20:59.131 "trtype": "TCP", 00:20:59.131 "adrfam": "IPv4", 00:20:59.131 "traddr": "10.0.0.1", 00:20:59.131 "trsvcid": "58152" 00:20:59.131 }, 00:20:59.131 "auth": { 00:20:59.131 "state": "completed", 00:20:59.131 "digest": "sha256", 00:20:59.131 "dhgroup": "null" 00:20:59.131 } 00:20:59.132 } 00:20:59.132 ]' 00:20:59.132 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.392 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.663 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:20:59.663 09:38:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.231 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.491 09:38:35 
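Besides the SPDK bdev path, every iteration also exercises the kernel initiator, passing the DHHC-1 secrets directly to nvme-cli. A condensed sketch of that leg; the <...> placeholders stand for the full base64 blobs that appear verbatim in the trace:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Connect with an explicit host secret and, for bidirectional auth, a controller secret.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret 'DHHC-1:01:<host key>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key>:'

  # Tear the session down again; the trace logs "disconnected 1 controller(s)".
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
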
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.491 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.491 09:38:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.751 { 00:21:00.751 "cntlid": 5, 00:21:00.751 "qid": 0, 00:21:00.751 "state": "enabled", 00:21:00.751 "thread": "nvmf_tgt_poll_group_000", 00:21:00.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:00.751 "listen_address": { 00:21:00.751 "trtype": "TCP", 00:21:00.751 "adrfam": "IPv4", 00:21:00.751 "traddr": "10.0.0.2", 00:21:00.751 "trsvcid": "4420" 00:21:00.751 }, 00:21:00.751 "peer_address": { 00:21:00.751 "trtype": "TCP", 00:21:00.751 "adrfam": "IPv4", 00:21:00.751 "traddr": "10.0.0.1", 00:21:00.751 "trsvcid": "58192" 00:21:00.751 }, 00:21:00.751 "auth": { 00:21:00.751 "state": "completed", 00:21:00.751 "digest": "sha256", 00:21:00.751 "dhgroup": "null" 00:21:00.751 } 00:21:00.751 } 00:21:00.751 ]' 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:00.751 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.011 09:38:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:01.011 09:38:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
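On the target side, each iteration authorizes the host NQN with the key under test and revokes it again afterwards. Note that for key3 the harness drops --dhchap-ctrlr-key (there is no ckey3), so that pass authenticates the host only. Condensed from the rpc_cmd calls in the trace (target-side default RPC socket):

  # Authorize the host on the subsystem, bound to a DH-HMAC-CHAP key.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key3            # no --dhchap-ctrlr-key: unidirectional auth

  # Revoke it once the cycle completes so the next key starts from a clean slate.
  rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
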
common/autotest_common.sh@10 -- # set +x 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.951 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.211 00:21:02.211 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.211 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.211 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.471 { 00:21:02.471 "cntlid": 7, 00:21:02.471 "qid": 0, 00:21:02.471 "state": "enabled", 00:21:02.471 "thread": "nvmf_tgt_poll_group_000", 00:21:02.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:02.471 "listen_address": { 00:21:02.471 "trtype": "TCP", 00:21:02.471 "adrfam": "IPv4", 00:21:02.471 "traddr": "10.0.0.2", 00:21:02.471 "trsvcid": "4420" 00:21:02.471 }, 00:21:02.471 "peer_address": { 00:21:02.471 "trtype": "TCP", 00:21:02.471 "adrfam": "IPv4", 00:21:02.471 "traddr": "10.0.0.1", 00:21:02.471 "trsvcid": "58228" 00:21:02.471 }, 00:21:02.471 "auth": { 00:21:02.471 "state": "completed", 00:21:02.471 "digest": "sha256", 00:21:02.471 "dhgroup": "null" 00:21:02.471 } 00:21:02.471 } 00:21:02.471 ]' 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.471 09:38:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.732 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:02.732 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:03.302 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
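Here the middle loop advances from the null DH group to ffdhe2048 and the whole connect/verify/disconnect cycle restarts at key0. The loop structure, reconstructed from the auth.sh line tags in the trace (the body is condensed; treat it as a paraphrase of the script, not its exact text):

  for digest in "${digests[@]}"; do          # auth.sh@118, sha256 throughout this window
    for dhgroup in "${dhgroups[@]}"; do      # auth.sh@119, null, ffdhe2048, ffdhe3072, ...
      for keyid in "${!keys[@]}"; do         # auth.sh@120, key0 through key3
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"     # auth.sh@121
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # auth.sh@123
      done
    done
  done
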
common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.562 09:38:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.822 00:21:03.822 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.822 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.822 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.097 { 00:21:04.097 "cntlid": 9, 00:21:04.097 "qid": 0, 00:21:04.097 "state": "enabled", 00:21:04.097 "thread": "nvmf_tgt_poll_group_000", 00:21:04.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:04.097 "listen_address": { 00:21:04.097 "trtype": "TCP", 00:21:04.097 "adrfam": "IPv4", 00:21:04.097 "traddr": "10.0.0.2", 00:21:04.097 "trsvcid": "4420" 00:21:04.097 }, 00:21:04.097 "peer_address": { 00:21:04.097 "trtype": "TCP", 00:21:04.097 "adrfam": "IPv4", 00:21:04.097 "traddr": "10.0.0.1", 00:21:04.097 "trsvcid": "58238" 00:21:04.097 }, 00:21:04.097 "auth": { 00:21:04.097 "state": "completed", 00:21:04.097 "digest": "sha256", 00:21:04.097 "dhgroup": "ffdhe2048" 00:21:04.097 } 00:21:04.097 } 00:21:04.097 ]' 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.097 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.360 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:04.360 09:38:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:04.931 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.192 09:38:40 
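The SPDK host leg of each cycle attaches an NVMe bdev controller over the authenticated transport, checks that exactly one controller named nvme0 came back, and detaches it. Condensed from the hostrpc calls above:

  # Attach a controller through the authenticated TCP transport.
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Expect exactly one controller back, named nvme0.
  rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

  # Drop it before the next (digest, dhgroup, key) combination.
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
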
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.192 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.453 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.453 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.713 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.713 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.713 { 00:21:05.713 "cntlid": 11, 00:21:05.713 "qid": 0, 00:21:05.713 "state": "enabled", 00:21:05.713 "thread": "nvmf_tgt_poll_group_000", 00:21:05.713 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:05.713 "listen_address": { 00:21:05.713 "trtype": "TCP", 00:21:05.713 "adrfam": "IPv4", 00:21:05.713 "traddr": "10.0.0.2", 00:21:05.713 "trsvcid": "4420" 00:21:05.713 }, 00:21:05.713 "peer_address": { 00:21:05.713 "trtype": "TCP", 00:21:05.713 "adrfam": "IPv4", 00:21:05.713 "traddr": "10.0.0.1", 00:21:05.713 "trsvcid": "58254" 00:21:05.713 }, 00:21:05.713 "auth": { 00:21:05.713 "state": "completed", 00:21:05.713 "digest": "sha256", 00:21:05.713 "dhgroup": "ffdhe2048" 00:21:05.713 } 00:21:05.713 } 00:21:05.713 ]' 00:21:05.713 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.713 09:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:05.713 09:38:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.713 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:05.713 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.713 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.713 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.713 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.975 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:05.975 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.546 09:38:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:06.807 09:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.807 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.067 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.067 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.068 { 00:21:07.068 "cntlid": 13, 00:21:07.068 "qid": 0, 00:21:07.068 "state": "enabled", 00:21:07.068 "thread": "nvmf_tgt_poll_group_000", 00:21:07.068 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:07.068 "listen_address": { 00:21:07.068 "trtype": "TCP", 00:21:07.068 "adrfam": "IPv4", 00:21:07.068 "traddr": "10.0.0.2", 00:21:07.068 "trsvcid": "4420" 00:21:07.068 }, 00:21:07.068 "peer_address": { 00:21:07.068 "trtype": "TCP", 00:21:07.068 "adrfam": "IPv4", 00:21:07.068 "traddr": "10.0.0.1", 00:21:07.068 "trsvcid": "58282" 00:21:07.068 }, 00:21:07.068 "auth": { 00:21:07.068 "state": "completed", 00:21:07.068 "digest": 
"sha256", 00:21:07.068 "dhgroup": "ffdhe2048" 00:21:07.068 } 00:21:07.068 } 00:21:07.068 ]' 00:21:07.068 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.330 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.591 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:07.591 09:38:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:08.164 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.423 09:38:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.423 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.423 00:21:08.682 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.682 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.682 09:38:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.682 { 00:21:08.682 "cntlid": 15, 00:21:08.682 "qid": 0, 00:21:08.682 "state": "enabled", 00:21:08.682 "thread": "nvmf_tgt_poll_group_000", 00:21:08.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:08.682 "listen_address": { 00:21:08.682 "trtype": "TCP", 00:21:08.682 "adrfam": "IPv4", 00:21:08.682 "traddr": "10.0.0.2", 00:21:08.682 "trsvcid": "4420" 00:21:08.682 }, 00:21:08.682 "peer_address": { 00:21:08.682 "trtype": "TCP", 00:21:08.682 "adrfam": "IPv4", 00:21:08.682 "traddr": "10.0.0.1", 00:21:08.682 
"trsvcid": "42500" 00:21:08.682 }, 00:21:08.682 "auth": { 00:21:08.682 "state": "completed", 00:21:08.682 "digest": "sha256", 00:21:08.682 "dhgroup": "ffdhe2048" 00:21:08.682 } 00:21:08.682 } 00:21:08.682 ]' 00:21:08.682 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.942 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.201 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:09.201 09:38:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.770 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:09.770 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:10.029 09:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.029 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.030 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.030 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.030 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.030 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.290 { 00:21:10.290 "cntlid": 17, 00:21:10.290 "qid": 0, 00:21:10.290 "state": "enabled", 00:21:10.290 "thread": "nvmf_tgt_poll_group_000", 00:21:10.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:10.290 "listen_address": { 00:21:10.290 "trtype": "TCP", 00:21:10.290 "adrfam": "IPv4", 
00:21:10.290 "traddr": "10.0.0.2", 00:21:10.290 "trsvcid": "4420" 00:21:10.290 }, 00:21:10.290 "peer_address": { 00:21:10.290 "trtype": "TCP", 00:21:10.290 "adrfam": "IPv4", 00:21:10.290 "traddr": "10.0.0.1", 00:21:10.290 "trsvcid": "42522" 00:21:10.290 }, 00:21:10.290 "auth": { 00:21:10.290 "state": "completed", 00:21:10.290 "digest": "sha256", 00:21:10.290 "dhgroup": "ffdhe3072" 00:21:10.290 } 00:21:10.290 } 00:21:10.290 ]' 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.290 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.549 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.550 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.550 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.550 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.550 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.550 09:38:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:10.550 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.489 09:38:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.748 00:21:11.748 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.748 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.748 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.008 { 
00:21:12.008 "cntlid": 19, 00:21:12.008 "qid": 0, 00:21:12.008 "state": "enabled", 00:21:12.008 "thread": "nvmf_tgt_poll_group_000", 00:21:12.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:12.008 "listen_address": { 00:21:12.008 "trtype": "TCP", 00:21:12.008 "adrfam": "IPv4", 00:21:12.008 "traddr": "10.0.0.2", 00:21:12.008 "trsvcid": "4420" 00:21:12.008 }, 00:21:12.008 "peer_address": { 00:21:12.008 "trtype": "TCP", 00:21:12.008 "adrfam": "IPv4", 00:21:12.008 "traddr": "10.0.0.1", 00:21:12.008 "trsvcid": "42548" 00:21:12.008 }, 00:21:12.008 "auth": { 00:21:12.008 "state": "completed", 00:21:12.008 "digest": "sha256", 00:21:12.008 "dhgroup": "ffdhe3072" 00:21:12.008 } 00:21:12.008 } 00:21:12.008 ]' 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.008 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.268 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:12.268 09:38:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.838 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.839 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:12.839 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.099 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.360 00:21:13.360 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.360 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.360 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.619 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.619 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.620 09:38:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.620 { 00:21:13.620 "cntlid": 21, 00:21:13.620 "qid": 0, 00:21:13.620 "state": "enabled", 00:21:13.620 "thread": "nvmf_tgt_poll_group_000", 00:21:13.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:13.620 "listen_address": { 00:21:13.620 "trtype": "TCP", 00:21:13.620 "adrfam": "IPv4", 00:21:13.620 "traddr": "10.0.0.2", 00:21:13.620 "trsvcid": "4420" 00:21:13.620 }, 00:21:13.620 "peer_address": { 00:21:13.620 "trtype": "TCP", 00:21:13.620 "adrfam": "IPv4", 00:21:13.620 "traddr": "10.0.0.1", 00:21:13.620 "trsvcid": "42582" 00:21:13.620 }, 00:21:13.620 "auth": { 00:21:13.620 "state": "completed", 00:21:13.620 "digest": "sha256", 00:21:13.620 "dhgroup": "ffdhe3072" 00:21:13.620 } 00:21:13.620 } 00:21:13.620 ]' 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.620 09:38:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.879 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:13.879 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.447 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.448 09:38:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.707 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.967 00:21:14.967 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.967 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.967 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.227 09:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.227 { 00:21:15.227 "cntlid": 23, 00:21:15.227 "qid": 0, 00:21:15.227 "state": "enabled", 00:21:15.227 "thread": "nvmf_tgt_poll_group_000", 00:21:15.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:15.227 "listen_address": { 00:21:15.227 "trtype": "TCP", 00:21:15.227 "adrfam": "IPv4", 00:21:15.227 "traddr": "10.0.0.2", 00:21:15.227 "trsvcid": "4420" 00:21:15.227 }, 00:21:15.227 "peer_address": { 00:21:15.227 "trtype": "TCP", 00:21:15.227 "adrfam": "IPv4", 00:21:15.227 "traddr": "10.0.0.1", 00:21:15.227 "trsvcid": "42622" 00:21:15.227 }, 00:21:15.227 "auth": { 00:21:15.227 "state": "completed", 00:21:15.227 "digest": "sha256", 00:21:15.227 "dhgroup": "ffdhe3072" 00:21:15.227 } 00:21:15.227 } 00:21:15.227 ]' 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.227 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.486 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:15.486 09:38:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:16.057 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.319 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.581 00:21:16.581 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.581 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.581 09:38:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.843 { 00:21:16.843 "cntlid": 25, 00:21:16.843 "qid": 0, 00:21:16.843 "state": "enabled", 00:21:16.843 "thread": "nvmf_tgt_poll_group_000", 00:21:16.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:16.843 "listen_address": { 00:21:16.843 "trtype": "TCP", 00:21:16.843 "adrfam": "IPv4", 00:21:16.843 "traddr": "10.0.0.2", 00:21:16.843 "trsvcid": "4420" 00:21:16.843 }, 00:21:16.843 "peer_address": { 00:21:16.843 "trtype": "TCP", 00:21:16.843 "adrfam": "IPv4", 00:21:16.843 "traddr": "10.0.0.1", 00:21:16.843 "trsvcid": "42642" 00:21:16.843 }, 00:21:16.843 "auth": { 00:21:16.843 "state": "completed", 00:21:16.843 "digest": "sha256", 00:21:16.843 "dhgroup": "ffdhe4096" 00:21:16.843 } 00:21:16.843 } 00:21:16.843 ]' 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.843 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.104 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:17.104 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:17.677 09:38:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.677 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.939 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:18.200 00:21:18.200 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.200 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.200 09:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.461 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.461 { 00:21:18.461 "cntlid": 27, 00:21:18.461 "qid": 0, 00:21:18.461 "state": "enabled", 00:21:18.461 "thread": "nvmf_tgt_poll_group_000", 00:21:18.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:18.461 "listen_address": { 00:21:18.461 "trtype": "TCP", 00:21:18.461 "adrfam": "IPv4", 00:21:18.461 "traddr": "10.0.0.2", 00:21:18.461 "trsvcid": "4420" 00:21:18.461 }, 00:21:18.461 "peer_address": { 00:21:18.461 "trtype": "TCP", 00:21:18.461 "adrfam": "IPv4", 00:21:18.461 "traddr": "10.0.0.1", 00:21:18.461 "trsvcid": "42670" 00:21:18.461 }, 00:21:18.461 "auth": { 00:21:18.461 "state": "completed", 00:21:18.461 "digest": "sha256", 00:21:18.461 "dhgroup": "ffdhe4096" 00:21:18.461 } 00:21:18.462 } 00:21:18.462 ]' 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.462 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.722 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:18.722 09:38:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.293 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.293 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.555 09:38:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.824 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.824 09:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.824 { 00:21:19.824 "cntlid": 29, 00:21:19.824 "qid": 0, 00:21:19.824 "state": "enabled", 00:21:19.824 "thread": "nvmf_tgt_poll_group_000", 00:21:19.824 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:19.824 "listen_address": { 00:21:19.824 "trtype": "TCP", 00:21:19.824 "adrfam": "IPv4", 00:21:19.824 "traddr": "10.0.0.2", 00:21:19.824 "trsvcid": "4420" 00:21:19.824 }, 00:21:19.824 "peer_address": { 00:21:19.824 "trtype": "TCP", 00:21:19.824 "adrfam": "IPv4", 00:21:19.824 "traddr": "10.0.0.1", 00:21:19.824 "trsvcid": "46658" 00:21:19.824 }, 00:21:19.824 "auth": { 00:21:19.824 "state": "completed", 00:21:19.824 "digest": "sha256", 00:21:19.824 "dhgroup": "ffdhe4096" 00:21:19.824 } 00:21:19.824 } 00:21:19.824 ]' 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:19.824 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.086 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:20.345 09:38:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret 
DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:20.917 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.178 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.439 00:21:21.439 09:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.439 { 00:21:21.439 "cntlid": 31, 00:21:21.439 "qid": 0, 00:21:21.439 "state": "enabled", 00:21:21.439 "thread": "nvmf_tgt_poll_group_000", 00:21:21.439 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:21.439 "listen_address": { 00:21:21.439 "trtype": "TCP", 00:21:21.439 "adrfam": "IPv4", 00:21:21.439 "traddr": "10.0.0.2", 00:21:21.439 "trsvcid": "4420" 00:21:21.439 }, 00:21:21.439 "peer_address": { 00:21:21.439 "trtype": "TCP", 00:21:21.439 "adrfam": "IPv4", 00:21:21.439 "traddr": "10.0.0.1", 00:21:21.439 "trsvcid": "46692" 00:21:21.439 }, 00:21:21.439 "auth": { 00:21:21.439 "state": "completed", 00:21:21.439 "digest": "sha256", 00:21:21.439 "dhgroup": "ffdhe4096" 00:21:21.439 } 00:21:21.439 } 00:21:21.439 ]' 00:21:21.439 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.700 09:38:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.961 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:21.961 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.534 09:38:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.795 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.056 00:21:23.056 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.056 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.056 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.318 { 00:21:23.318 "cntlid": 33, 00:21:23.318 "qid": 0, 00:21:23.318 "state": "enabled", 00:21:23.318 "thread": "nvmf_tgt_poll_group_000", 00:21:23.318 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:23.318 "listen_address": { 00:21:23.318 "trtype": "TCP", 00:21:23.318 "adrfam": "IPv4", 00:21:23.318 "traddr": "10.0.0.2", 00:21:23.318 "trsvcid": "4420" 00:21:23.318 }, 00:21:23.318 "peer_address": { 00:21:23.318 "trtype": "TCP", 00:21:23.318 "adrfam": "IPv4", 00:21:23.318 "traddr": "10.0.0.1", 00:21:23.318 "trsvcid": "46722" 00:21:23.318 }, 00:21:23.318 "auth": { 00:21:23.318 "state": "completed", 00:21:23.318 "digest": "sha256", 00:21:23.318 "dhgroup": "ffdhe6144" 00:21:23.318 } 00:21:23.318 } 00:21:23.318 ]' 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.318 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.579 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:23.579 09:38:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.150 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.411 09:38:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.671 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.932 { 00:21:24.932 "cntlid": 35, 00:21:24.932 "qid": 0, 00:21:24.932 "state": "enabled", 00:21:24.932 "thread": "nvmf_tgt_poll_group_000", 00:21:24.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:24.932 "listen_address": { 00:21:24.932 "trtype": "TCP", 00:21:24.932 "adrfam": "IPv4", 00:21:24.932 "traddr": "10.0.0.2", 00:21:24.932 "trsvcid": "4420" 00:21:24.932 }, 00:21:24.932 "peer_address": { 00:21:24.932 "trtype": "TCP", 00:21:24.932 "adrfam": "IPv4", 00:21:24.932 "traddr": "10.0.0.1", 00:21:24.932 "trsvcid": "46746" 00:21:24.932 }, 00:21:24.932 "auth": { 00:21:24.932 "state": "completed", 00:21:24.932 "digest": "sha256", 00:21:24.932 "dhgroup": "ffdhe6144" 00:21:24.932 } 00:21:24.932 } 00:21:24.932 ]' 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:24.932 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:25.192 09:39:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:26.131 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.131 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:26.131 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.131 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.131 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.132 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.391 00:21:26.391 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.391 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.391 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.651 09:39:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.651 { 00:21:26.651 "cntlid": 37, 00:21:26.651 "qid": 0, 00:21:26.651 "state": "enabled", 00:21:26.651 "thread": "nvmf_tgt_poll_group_000", 00:21:26.651 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:26.651 "listen_address": { 00:21:26.651 "trtype": "TCP", 00:21:26.651 "adrfam": "IPv4", 00:21:26.651 "traddr": "10.0.0.2", 00:21:26.651 "trsvcid": "4420" 00:21:26.651 }, 00:21:26.651 "peer_address": { 00:21:26.651 "trtype": "TCP", 00:21:26.651 "adrfam": "IPv4", 00:21:26.651 "traddr": "10.0.0.1", 00:21:26.651 "trsvcid": "46782" 00:21:26.651 }, 00:21:26.651 "auth": { 00:21:26.651 "state": "completed", 00:21:26.651 "digest": "sha256", 00:21:26.651 "dhgroup": "ffdhe6144" 00:21:26.651 } 00:21:26.651 } 00:21:26.651 ]' 00:21:26.651 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.651 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:26.651 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.651 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.651 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:26.916 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.916 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:26.916 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.916 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:26.916 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:27.858 09:39:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.858 09:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.858 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.119 00:21:28.119 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.119 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.119 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.380 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.380 { 00:21:28.380 "cntlid": 39, 00:21:28.380 "qid": 0, 00:21:28.380 "state": "enabled", 00:21:28.380 "thread": "nvmf_tgt_poll_group_000", 00:21:28.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:28.380 "listen_address": { 00:21:28.380 "trtype": "TCP", 00:21:28.380 "adrfam": "IPv4", 00:21:28.380 "traddr": "10.0.0.2", 00:21:28.380 "trsvcid": "4420" 00:21:28.380 }, 00:21:28.380 "peer_address": { 00:21:28.380 "trtype": "TCP", 00:21:28.380 "adrfam": "IPv4", 00:21:28.380 "traddr": "10.0.0.1", 00:21:28.381 "trsvcid": "46810" 00:21:28.381 }, 00:21:28.381 "auth": { 00:21:28.381 "state": "completed", 00:21:28.381 "digest": "sha256", 00:21:28.381 "dhgroup": "ffdhe6144" 00:21:28.381 } 00:21:28.381 } 00:21:28.381 ]' 00:21:28.381 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.381 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:28.381 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.381 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.381 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.641 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:21:28.641 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.641 09:39:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.641 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:28.641 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:29.211 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
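[Editor's note: the repeating pattern in this trace comes from the nested loops whose markers are visible above (target/auth.sh@118 for digest, @119 for dhgroup, @120 for keyid, @121/@123 for the per-iteration body): for every digest, DH group, and configured key index, the host options are reconfigured and a full authenticated connect is attempted. A hedged paraphrase of that driver loop follows; the helper names are taken from the trace markers, and everything else is an assumption rather than the verbatim script text.

    for digest in "${digests[@]}"; do          # sha256 here, sha384 later in this run
      for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe6144, ffdhe8192 as seen above
        for keyid in "${!keys[@]}"; do         # key0..key3 in this run
          # point the host-side bdev_nvme at one digest/dhgroup combination
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # add the host with the keyed secret, attach, verify, and tear down
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done
]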
00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.473 09:39:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.045 00:21:30.045 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.045 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.045 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.305 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.305 { 00:21:30.305 "cntlid": 41, 00:21:30.305 "qid": 0, 00:21:30.305 "state": "enabled", 00:21:30.305 "thread": "nvmf_tgt_poll_group_000", 00:21:30.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:30.305 "listen_address": { 00:21:30.305 "trtype": "TCP", 00:21:30.305 "adrfam": "IPv4", 00:21:30.305 "traddr": "10.0.0.2", 00:21:30.305 "trsvcid": "4420" 00:21:30.305 }, 00:21:30.305 "peer_address": { 00:21:30.305 "trtype": "TCP", 00:21:30.305 "adrfam": "IPv4", 00:21:30.305 "traddr": "10.0.0.1", 00:21:30.305 "trsvcid": "43498" 00:21:30.305 }, 00:21:30.305 "auth": { 00:21:30.305 "state": "completed", 00:21:30.306 "digest": "sha256", 00:21:30.306 "dhgroup": "ffdhe8192" 00:21:30.306 } 00:21:30.306 } 00:21:30.306 ]' 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:30.306 09:39:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.306 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.565 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:30.566 09:39:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:31.136 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.136 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:31.136 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.136 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.137 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.137 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.137 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:31.137 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.397 09:39:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.968 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.968 { 00:21:31.968 "cntlid": 43, 00:21:31.968 "qid": 0, 00:21:31.968 "state": "enabled", 00:21:31.968 "thread": "nvmf_tgt_poll_group_000", 00:21:31.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:31.968 "listen_address": { 00:21:31.968 "trtype": "TCP", 00:21:31.968 "adrfam": "IPv4", 00:21:31.968 "traddr": "10.0.0.2", 00:21:31.968 "trsvcid": "4420" 00:21:31.968 }, 00:21:31.968 "peer_address": { 00:21:31.968 "trtype": "TCP", 00:21:31.968 "adrfam": "IPv4", 00:21:31.968 "traddr": "10.0.0.1", 00:21:31.968 "trsvcid": "43526" 00:21:31.968 }, 00:21:31.968 "auth": { 00:21:31.968 "state": "completed", 00:21:31.968 "digest": "sha256", 00:21:31.968 "dhgroup": "ffdhe8192" 00:21:31.968 } 00:21:31.968 } 00:21:31.968 ]' 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:21:31.968 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:32.229 09:39:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.172 09:39:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.172 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.745 00:21:33.745 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.745 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.745 09:39:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.745 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.745 { 00:21:33.745 "cntlid": 45, 00:21:33.745 "qid": 0, 00:21:33.745 "state": "enabled", 00:21:33.745 "thread": "nvmf_tgt_poll_group_000", 00:21:33.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:33.745 "listen_address": { 00:21:33.745 "trtype": "TCP", 00:21:33.745 "adrfam": "IPv4", 00:21:33.745 "traddr": "10.0.0.2", 00:21:33.745 "trsvcid": "4420" 00:21:33.745 }, 00:21:33.745 "peer_address": { 00:21:33.745 "trtype": "TCP", 00:21:33.745 "adrfam": "IPv4", 00:21:33.745 "traddr": "10.0.0.1", 00:21:33.745 "trsvcid": "43556" 00:21:33.745 }, 00:21:33.745 "auth": { 00:21:33.745 "state": "completed", 00:21:33.745 "digest": "sha256", 00:21:33.745 "dhgroup": "ffdhe8192" 00:21:33.745 } 00:21:33.745 } 00:21:33.745 ]' 00:21:33.745 
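[Editor's note: each successful attach is verified the same way throughout this log (target/auth.sh@73-77): the controller name is read back, the subsystem's qpairs are fetched over RPC, and jq asserts that the negotiated digest, DH group, and auth state match what was configured. A cleaned-up sketch of that check, under the assumption that rpc_cmd and hostrpc wrap the rpc.py invocations shown above:

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]      # negotiated hash
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # handshake finished
    hostrpc bdev_nvme_detach_controller nvme0                       # tear down before the next key
]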
09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.012 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.379 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:34.379 09:39:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:34.688 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.958 09:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.958 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:35.528 00:21:35.528 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.528 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.528 09:39:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.790 { 00:21:35.790 "cntlid": 47, 00:21:35.790 "qid": 0, 00:21:35.790 "state": "enabled", 00:21:35.790 "thread": "nvmf_tgt_poll_group_000", 00:21:35.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:35.790 "listen_address": { 00:21:35.790 "trtype": "TCP", 00:21:35.790 "adrfam": "IPv4", 00:21:35.790 "traddr": "10.0.0.2", 00:21:35.790 "trsvcid": "4420" 00:21:35.790 }, 00:21:35.790 "peer_address": { 00:21:35.790 "trtype": "TCP", 00:21:35.790 "adrfam": "IPv4", 00:21:35.790 "traddr": "10.0.0.1", 00:21:35.790 "trsvcid": "43586" 00:21:35.790 }, 00:21:35.790 "auth": { 00:21:35.790 "state": "completed", 00:21:35.790 
"digest": "sha256", 00:21:35.790 "dhgroup": "ffdhe8192" 00:21:35.790 } 00:21:35.790 } 00:21:35.790 ]' 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.790 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.050 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:36.050 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.620 09:39:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:36.881 09:39:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.881 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.881 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.142 { 00:21:37.142 "cntlid": 49, 00:21:37.142 "qid": 0, 00:21:37.142 "state": "enabled", 00:21:37.142 "thread": "nvmf_tgt_poll_group_000", 00:21:37.142 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:37.142 "listen_address": { 00:21:37.142 "trtype": "TCP", 00:21:37.142 "adrfam": "IPv4", 
00:21:37.142 "traddr": "10.0.0.2", 00:21:37.142 "trsvcid": "4420" 00:21:37.142 }, 00:21:37.142 "peer_address": { 00:21:37.142 "trtype": "TCP", 00:21:37.142 "adrfam": "IPv4", 00:21:37.142 "traddr": "10.0.0.1", 00:21:37.142 "trsvcid": "43622" 00:21:37.142 }, 00:21:37.142 "auth": { 00:21:37.142 "state": "completed", 00:21:37.142 "digest": "sha384", 00:21:37.142 "dhgroup": "null" 00:21:37.142 } 00:21:37.142 } 00:21:37.142 ]' 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:37.142 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:37.404 09:39:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.347 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:38.608 00:21:38.608 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.608 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.608 09:39:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.869 { 00:21:38.869 "cntlid": 51, 00:21:38.869 "qid": 0, 00:21:38.869 "state": "enabled", 
00:21:38.869 "thread": "nvmf_tgt_poll_group_000", 00:21:38.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.869 "listen_address": { 00:21:38.869 "trtype": "TCP", 00:21:38.869 "adrfam": "IPv4", 00:21:38.869 "traddr": "10.0.0.2", 00:21:38.869 "trsvcid": "4420" 00:21:38.869 }, 00:21:38.869 "peer_address": { 00:21:38.869 "trtype": "TCP", 00:21:38.869 "adrfam": "IPv4", 00:21:38.869 "traddr": "10.0.0.1", 00:21:38.869 "trsvcid": "58542" 00:21:38.869 }, 00:21:38.869 "auth": { 00:21:38.869 "state": "completed", 00:21:38.869 "digest": "sha384", 00:21:38.869 "dhgroup": "null" 00:21:38.869 } 00:21:38.869 } 00:21:38.869 ]' 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.869 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.131 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:39.131 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:21:39.702 09:39:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.963 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.963 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.224 09:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.224 { 00:21:40.224 "cntlid": 53, 00:21:40.224 "qid": 0, 00:21:40.224 "state": "enabled", 00:21:40.224 "thread": "nvmf_tgt_poll_group_000", 00:21:40.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:40.224 "listen_address": { 00:21:40.224 "trtype": "TCP", 00:21:40.224 "adrfam": "IPv4", 00:21:40.224 "traddr": "10.0.0.2", 00:21:40.224 "trsvcid": "4420" 00:21:40.224 }, 00:21:40.224 "peer_address": { 00:21:40.224 "trtype": "TCP", 00:21:40.224 "adrfam": "IPv4", 00:21:40.224 "traddr": "10.0.0.1", 00:21:40.224 "trsvcid": "58564" 00:21:40.224 }, 00:21:40.224 "auth": { 00:21:40.224 "state": "completed", 00:21:40.224 "digest": "sha384", 00:21:40.224 "dhgroup": "null" 00:21:40.224 } 00:21:40.224 } 00:21:40.224 ]' 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:40.224 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:40.484 09:39:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:41.424 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.424 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:41.424 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.425 09:39:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.684 00:21:41.684 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.684 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.684 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.944 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.944 { 00:21:41.944 "cntlid": 55, 00:21:41.944 "qid": 0, 00:21:41.944 "state": "enabled", 00:21:41.944 "thread": "nvmf_tgt_poll_group_000", 00:21:41.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.944 "listen_address": { 00:21:41.944 "trtype": "TCP", 00:21:41.944 "adrfam": "IPv4", 00:21:41.945 "traddr": "10.0.0.2", 00:21:41.945 "trsvcid": "4420" 00:21:41.945 }, 00:21:41.945 "peer_address": { 00:21:41.945 "trtype": "TCP", 00:21:41.945 "adrfam": "IPv4", 00:21:41.945 "traddr": "10.0.0.1", 00:21:41.945 "trsvcid": "58600" 00:21:41.945 }, 00:21:41.945 "auth": { 00:21:41.945 "state": "completed", 00:21:41.945 "digest": "sha384", 00:21:41.945 "dhgroup": "null" 00:21:41.945 } 00:21:41.945 } 00:21:41.945 ]' 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.945 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.206 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:42.206 09:39:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:42.776 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.776 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.776 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.776 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.776 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.777 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.777 09:39:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.777 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:42.777 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.038 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.299 00:21:43.299 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.299 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.299 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.560 { 00:21:43.560 "cntlid": 57, 00:21:43.560 "qid": 0, 00:21:43.560 "state": "enabled", 00:21:43.560 "thread": "nvmf_tgt_poll_group_000", 00:21:43.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:43.560 "listen_address": { 00:21:43.560 "trtype": "TCP", 00:21:43.560 "adrfam": "IPv4", 00:21:43.560 "traddr": "10.0.0.2", 00:21:43.560 "trsvcid": "4420" 00:21:43.560 }, 00:21:43.560 "peer_address": { 00:21:43.560 "trtype": "TCP", 00:21:43.560 "adrfam": "IPv4", 00:21:43.560 "traddr": "10.0.0.1", 00:21:43.560 "trsvcid": "58636" 00:21:43.560 }, 00:21:43.560 "auth": { 00:21:43.560 "state": "completed", 00:21:43.560 "digest": "sha384", 00:21:43.560 "dhgroup": "ffdhe2048" 00:21:43.560 } 00:21:43.560 } 00:21:43.560 ]' 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.560 09:39:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.821 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:43.821 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.394 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.655 09:39:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.915 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.915 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.176 { 00:21:45.176 "cntlid": 59, 00:21:45.176 "qid": 0, 00:21:45.176 "state": "enabled", 00:21:45.176 "thread": "nvmf_tgt_poll_group_000", 00:21:45.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:45.176 "listen_address": { 00:21:45.176 "trtype": "TCP", 00:21:45.176 "adrfam": "IPv4", 00:21:45.176 "traddr": "10.0.0.2", 00:21:45.176 "trsvcid": "4420" 00:21:45.176 }, 00:21:45.176 "peer_address": { 00:21:45.176 "trtype": "TCP", 00:21:45.176 "adrfam": "IPv4", 00:21:45.176 "traddr": "10.0.0.1", 00:21:45.176 "trsvcid": "58668" 00:21:45.176 }, 00:21:45.176 "auth": { 00:21:45.176 "state": "completed", 00:21:45.176 "digest": "sha384", 00:21:45.176 "dhgroup": "ffdhe2048" 00:21:45.176 } 00:21:45.176 } 00:21:45.176 ]' 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.176 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.437 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:45.437 09:39:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.009 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.269 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.529 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.529 { 00:21:46.529 "cntlid": 61, 00:21:46.529 "qid": 0, 00:21:46.529 "state": "enabled", 00:21:46.529 "thread": "nvmf_tgt_poll_group_000", 00:21:46.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.529 "listen_address": { 00:21:46.529 "trtype": "TCP", 00:21:46.529 "adrfam": "IPv4", 00:21:46.529 "traddr": "10.0.0.2", 00:21:46.529 "trsvcid": "4420" 00:21:46.529 }, 00:21:46.529 "peer_address": { 00:21:46.529 "trtype": "TCP", 00:21:46.529 "adrfam": "IPv4", 00:21:46.529 "traddr": "10.0.0.1", 00:21:46.529 "trsvcid": "58686" 00:21:46.529 }, 00:21:46.529 "auth": { 00:21:46.529 "state": "completed", 00:21:46.529 "digest": "sha384", 00:21:46.529 "dhgroup": "ffdhe2048" 00:21:46.529 } 00:21:46.529 } 00:21:46.529 ]' 00:21:46.529 09:39:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.789 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.049 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:47.049 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.620 09:39:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.880 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.880 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.139 { 00:21:48.139 "cntlid": 63, 00:21:48.139 "qid": 0, 00:21:48.139 "state": "enabled", 00:21:48.139 "thread": "nvmf_tgt_poll_group_000", 00:21:48.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:48.139 "listen_address": { 00:21:48.139 "trtype": "TCP", 00:21:48.139 "adrfam": "IPv4", 00:21:48.139 "traddr": "10.0.0.2", 00:21:48.139 "trsvcid": "4420" 00:21:48.139 }, 00:21:48.139 "peer_address": { 00:21:48.139 "trtype": "TCP", 00:21:48.139 "adrfam": "IPv4", 00:21:48.139 "traddr": "10.0.0.1", 00:21:48.139 "trsvcid": "58702" 00:21:48.139 }, 00:21:48.139 "auth": { 00:21:48.139 "state": "completed", 00:21:48.139 "digest": "sha384", 00:21:48.139 "dhgroup": "ffdhe2048" 00:21:48.139 } 00:21:48.139 } 00:21:48.139 ]' 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.139 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:48.399 09:39:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:49.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.341 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.601 
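Right after each attach, the script checks the same invariants: the controller must be visible on the host side, and the target must report a qpair whose auth block matches the digest and dhgroup under test with state "completed". A condensed sketch of that check (target/auth.sh@73-77), using the sha384/ffdhe3072 combination this pass exercises; hostrpc mirrors the wrapper the trace expands, while rpc_cmd is the target-side RPC helper whose socket is not shown in this section:

    # hostrpc as expanded throughout this trace: host-side SPDK RPC socket
    hostrpc() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
    # the freshly attached controller must be visible on the host
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # the target must report the qpair with a completed DH-HMAC-CHAP exchange
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]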
00:21:49.601 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.601 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.601 09:39:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.861 { 00:21:49.861 "cntlid": 65, 00:21:49.861 "qid": 0, 00:21:49.861 "state": "enabled", 00:21:49.861 "thread": "nvmf_tgt_poll_group_000", 00:21:49.861 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:49.861 "listen_address": { 00:21:49.861 "trtype": "TCP", 00:21:49.861 "adrfam": "IPv4", 00:21:49.861 "traddr": "10.0.0.2", 00:21:49.861 "trsvcid": "4420" 00:21:49.861 }, 00:21:49.861 "peer_address": { 00:21:49.861 "trtype": "TCP", 00:21:49.861 "adrfam": "IPv4", 00:21:49.861 "traddr": "10.0.0.1", 00:21:49.861 "trsvcid": "34942" 00:21:49.861 }, 00:21:49.861 "auth": { 00:21:49.861 "state": "completed", 00:21:49.861 "digest": "sha384", 00:21:49.861 "dhgroup": "ffdhe3072" 00:21:49.861 } 00:21:49.861 } 00:21:49.861 ]' 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.861 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.120 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:50.120 09:39:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.688 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.948 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.208 00:21:51.208 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:51.208 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:51.208 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.468 { 00:21:51.468 "cntlid": 67, 00:21:51.468 "qid": 0, 00:21:51.468 "state": "enabled", 00:21:51.468 "thread": "nvmf_tgt_poll_group_000", 00:21:51.468 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:51.468 "listen_address": { 00:21:51.468 "trtype": "TCP", 00:21:51.468 "adrfam": "IPv4", 00:21:51.468 "traddr": "10.0.0.2", 00:21:51.468 "trsvcid": "4420" 00:21:51.468 }, 00:21:51.468 "peer_address": { 00:21:51.468 "trtype": "TCP", 00:21:51.468 "adrfam": "IPv4", 00:21:51.468 "traddr": "10.0.0.1", 00:21:51.468 "trsvcid": "34976" 00:21:51.468 }, 00:21:51.468 "auth": { 00:21:51.468 "state": "completed", 00:21:51.468 "digest": "sha384", 00:21:51.468 "dhgroup": "ffdhe3072" 00:21:51.468 } 00:21:51.468 } 00:21:51.468 ]' 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.468 09:39:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.728 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret 
DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:51.728 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.298 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:52.558 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:52.558 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.558 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.559 09:39:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.819 00:21:52.819 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.819 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.819 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.079 { 00:21:53.079 "cntlid": 69, 00:21:53.079 "qid": 0, 00:21:53.079 "state": "enabled", 00:21:53.079 "thread": "nvmf_tgt_poll_group_000", 00:21:53.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:53.079 "listen_address": { 00:21:53.079 "trtype": "TCP", 00:21:53.079 "adrfam": "IPv4", 00:21:53.079 "traddr": "10.0.0.2", 00:21:53.079 "trsvcid": "4420" 00:21:53.079 }, 00:21:53.079 "peer_address": { 00:21:53.079 "trtype": "TCP", 00:21:53.079 "adrfam": "IPv4", 00:21:53.079 "traddr": "10.0.0.1", 00:21:53.079 "trsvcid": "35012" 00:21:53.079 }, 00:21:53.079 "auth": { 00:21:53.079 "state": "completed", 00:21:53.079 "digest": "sha384", 00:21:53.079 "dhgroup": "ffdhe3072" 00:21:53.079 } 00:21:53.079 } 00:21:53.079 ]' 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.079 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:53.339 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:53.339 09:39:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:53.910 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
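(Aside: the trace above keeps repeating one per-key DH-HMAC-CHAP iteration from target/auth.sh. A condensed bash sketch of that host-app iteration follows; the rpc.py path, sockets, NQNs, and flags are copied from the log, while key1/ckey1 stand in for keyring entries the test registered earlier in the run.)

# One connect_authenticate iteration, as driven by target/auth.sh in this run.
# Assumes the SPDK target listens on 10.0.0.2:4420 and answers rpc.py on its
# default socket, while the host-side app owns /var/tmp/host.sock.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Pin the host side to one digest/dhgroup so the negotiated parameters are known.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
# Authorize the host NQN on the target with a key pair (key1/ckey1 are placeholders
# for keys created earlier in the test).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Attach a controller; this is where the DH-HMAC-CHAP exchange actually runs.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Confirm on the target that the qpair completed auth with the expected
# digest/dhgroup (the jq '.[0].auth.*' checks seen in the trace).
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'
# Tear down before the next digest/dhgroup/key combination.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0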
00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.170 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.430 00:21:54.430 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.430 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.430 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.691 { 00:21:54.691 "cntlid": 71, 00:21:54.691 "qid": 0, 00:21:54.691 "state": "enabled", 00:21:54.691 "thread": "nvmf_tgt_poll_group_000", 00:21:54.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:54.691 "listen_address": { 00:21:54.691 "trtype": "TCP", 00:21:54.691 "adrfam": "IPv4", 00:21:54.691 "traddr": "10.0.0.2", 00:21:54.691 "trsvcid": "4420" 00:21:54.691 }, 00:21:54.691 "peer_address": { 00:21:54.691 "trtype": "TCP", 00:21:54.691 "adrfam": "IPv4", 00:21:54.691 "traddr": "10.0.0.1", 00:21:54.691 "trsvcid": "35038" 00:21:54.691 }, 00:21:54.691 "auth": { 00:21:54.691 "state": "completed", 00:21:54.691 "digest": "sha384", 00:21:54.691 "dhgroup": "ffdhe3072" 00:21:54.691 } 00:21:54.691 } 00:21:54.691 ]' 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:54.691 09:39:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.691 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:54.691 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.691 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.691 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.691 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.952 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:54.952 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.524 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.784 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
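(Each key is also re-validated with the kernel initiator, as in the nvme connect/disconnect pair just above. Condensed, with every flag as logged; <host-secret> and <ctrl-secret> are placeholders for the DHHC-1:xx:... strings the test generated.)

# Kernel-initiator check of the same credentials via nvme-cli.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
  --dhchap-secret '<host-secret>' --dhchap-ctrl-secret '<ctrl-secret>'
# Drop the connection once the handshake has been verified.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
# The host entry is then removed on the target before the next combination:
#   rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>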
00:21:55.784 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.784 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.784 09:39:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.784 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.045 { 00:21:56.045 "cntlid": 73, 00:21:56.045 "qid": 0, 00:21:56.045 "state": "enabled", 00:21:56.045 "thread": "nvmf_tgt_poll_group_000", 00:21:56.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.045 "listen_address": { 00:21:56.045 "trtype": "TCP", 00:21:56.045 "adrfam": "IPv4", 00:21:56.045 "traddr": "10.0.0.2", 00:21:56.045 "trsvcid": "4420" 00:21:56.045 }, 00:21:56.045 "peer_address": { 00:21:56.045 "trtype": "TCP", 00:21:56.045 "adrfam": "IPv4", 00:21:56.045 "traddr": "10.0.0.1", 00:21:56.045 "trsvcid": "35050" 00:21:56.045 }, 00:21:56.045 "auth": { 00:21:56.045 "state": "completed", 00:21:56.045 "digest": "sha384", 00:21:56.045 "dhgroup": "ffdhe4096" 00:21:56.045 } 00:21:56.045 } 00:21:56.045 ]' 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:56.045 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.306 
09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:56.306 09:39:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:21:56.876 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.136 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.396 00:21:57.396 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.396 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.396 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.656 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.656 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.656 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.656 09:39:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.656 { 00:21:57.656 "cntlid": 75, 00:21:57.656 "qid": 0, 00:21:57.656 "state": "enabled", 00:21:57.656 "thread": "nvmf_tgt_poll_group_000", 00:21:57.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:57.656 "listen_address": { 00:21:57.656 "trtype": "TCP", 00:21:57.656 "adrfam": "IPv4", 00:21:57.656 "traddr": "10.0.0.2", 00:21:57.656 "trsvcid": "4420" 00:21:57.656 }, 00:21:57.656 "peer_address": { 00:21:57.656 "trtype": "TCP", 00:21:57.656 "adrfam": "IPv4", 00:21:57.656 "traddr": "10.0.0.1", 00:21:57.656 "trsvcid": "35078" 00:21:57.656 }, 00:21:57.656 "auth": { 00:21:57.656 "state": "completed", 00:21:57.656 "digest": "sha384", 00:21:57.656 "dhgroup": "ffdhe4096" 00:21:57.656 } 00:21:57.656 } 00:21:57.656 ]' 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:21:57.656 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.916 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.916 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.916 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.916 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:57.916 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.899 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.899 09:39:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.899 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:59.159 00:21:59.159 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.159 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.159 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.421 { 00:21:59.421 "cntlid": 77, 00:21:59.421 "qid": 0, 00:21:59.421 "state": "enabled", 00:21:59.421 "thread": "nvmf_tgt_poll_group_000", 00:21:59.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:59.421 "listen_address": { 00:21:59.421 "trtype": "TCP", 00:21:59.421 "adrfam": "IPv4", 00:21:59.421 "traddr": "10.0.0.2", 00:21:59.421 "trsvcid": "4420" 00:21:59.421 }, 00:21:59.421 "peer_address": { 00:21:59.421 "trtype": "TCP", 00:21:59.421 "adrfam": "IPv4", 00:21:59.421 "traddr": "10.0.0.1", 00:21:59.421 "trsvcid": "38984" 00:21:59.421 }, 00:21:59.421 "auth": { 00:21:59.421 "state": "completed", 00:21:59.421 "digest": "sha384", 00:21:59.421 "dhgroup": "ffdhe4096" 00:21:59.421 } 00:21:59.421 } 00:21:59.421 ]' 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:59.421 09:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.421 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.682 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:21:59.682 09:39:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.252 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.511 09:39:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.771 00:22:00.771 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.771 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.771 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.031 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.031 { 00:22:01.031 "cntlid": 79, 00:22:01.031 "qid": 0, 00:22:01.031 "state": "enabled", 00:22:01.031 "thread": "nvmf_tgt_poll_group_000", 00:22:01.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.031 "listen_address": { 00:22:01.031 "trtype": "TCP", 00:22:01.031 "adrfam": "IPv4", 00:22:01.031 "traddr": "10.0.0.2", 00:22:01.031 "trsvcid": "4420" 00:22:01.031 }, 00:22:01.031 "peer_address": { 00:22:01.031 "trtype": "TCP", 00:22:01.031 "adrfam": "IPv4", 00:22:01.031 "traddr": "10.0.0.1", 00:22:01.032 "trsvcid": "38998" 00:22:01.032 }, 00:22:01.032 "auth": { 00:22:01.032 "state": "completed", 00:22:01.032 "digest": "sha384", 00:22:01.032 "dhgroup": "ffdhe4096" 00:22:01.032 } 00:22:01.032 } 00:22:01.032 ]' 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.032 09:39:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.032 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.292 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:01.292 09:39:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.864 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.865 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:01.865 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.125 09:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.125 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.386 00:22:02.386 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.386 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.386 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.646 { 00:22:02.646 "cntlid": 81, 00:22:02.646 "qid": 0, 00:22:02.646 "state": "enabled", 00:22:02.646 "thread": "nvmf_tgt_poll_group_000", 00:22:02.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:02.646 "listen_address": { 00:22:02.646 "trtype": "TCP", 00:22:02.646 "adrfam": "IPv4", 00:22:02.646 "traddr": "10.0.0.2", 00:22:02.646 "trsvcid": "4420" 00:22:02.646 }, 00:22:02.646 "peer_address": { 00:22:02.646 "trtype": "TCP", 00:22:02.646 "adrfam": "IPv4", 00:22:02.646 "traddr": "10.0.0.1", 00:22:02.646 "trsvcid": "39018" 00:22:02.646 }, 00:22:02.646 "auth": { 00:22:02.646 "state": "completed", 00:22:02.646 "digest": 
"sha384", 00:22:02.646 "dhgroup": "ffdhe6144" 00:22:02.646 } 00:22:02.646 } 00:22:02.646 ]' 00:22:02.646 09:39:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.646 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:02.646 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.646 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.646 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.907 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.907 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.907 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.907 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:02.907 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:03.480 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:03.741 09:39:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.741 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.002 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.263 { 00:22:04.263 "cntlid": 83, 00:22:04.263 "qid": 0, 00:22:04.263 "state": "enabled", 00:22:04.263 "thread": "nvmf_tgt_poll_group_000", 00:22:04.263 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:04.263 "listen_address": { 00:22:04.263 "trtype": "TCP", 00:22:04.263 "adrfam": "IPv4", 00:22:04.263 "traddr": "10.0.0.2", 00:22:04.263 
"trsvcid": "4420" 00:22:04.263 }, 00:22:04.263 "peer_address": { 00:22:04.263 "trtype": "TCP", 00:22:04.263 "adrfam": "IPv4", 00:22:04.263 "traddr": "10.0.0.1", 00:22:04.263 "trsvcid": "39044" 00:22:04.263 }, 00:22:04.263 "auth": { 00:22:04.263 "state": "completed", 00:22:04.263 "digest": "sha384", 00:22:04.263 "dhgroup": "ffdhe6144" 00:22:04.263 } 00:22:04.263 } 00:22:04.263 ]' 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:04.263 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.523 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:04.523 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.523 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.523 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.524 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.524 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:04.524 09:39:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:05.466 
09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.466 09:39:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.727 00:22:05.727 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.727 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.727 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.988 { 00:22:05.988 "cntlid": 85, 00:22:05.988 "qid": 0, 00:22:05.988 "state": "enabled", 00:22:05.988 "thread": "nvmf_tgt_poll_group_000", 00:22:05.988 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:05.988 "listen_address": { 00:22:05.988 "trtype": "TCP", 00:22:05.988 "adrfam": "IPv4", 00:22:05.988 "traddr": "10.0.0.2", 00:22:05.988 "trsvcid": "4420" 00:22:05.988 }, 00:22:05.988 "peer_address": { 00:22:05.988 "trtype": "TCP", 00:22:05.988 "adrfam": "IPv4", 00:22:05.988 "traddr": "10.0.0.1", 00:22:05.988 "trsvcid": "39058" 00:22:05.988 }, 00:22:05.988 "auth": { 00:22:05.988 "state": "completed", 00:22:05.988 "digest": "sha384", 00:22:05.988 "dhgroup": "ffdhe6144" 00:22:05.988 } 00:22:05.988 } 00:22:05.988 ]' 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.988 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.249 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:06.249 09:39:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:06.820 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.081 09:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.081 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.341 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.601 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.602 09:39:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.602 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.602 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.602 { 00:22:07.602 "cntlid": 87, 
00:22:07.602 "qid": 0, 00:22:07.602 "state": "enabled", 00:22:07.602 "thread": "nvmf_tgt_poll_group_000", 00:22:07.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.602 "listen_address": { 00:22:07.602 "trtype": "TCP", 00:22:07.602 "adrfam": "IPv4", 00:22:07.602 "traddr": "10.0.0.2", 00:22:07.602 "trsvcid": "4420" 00:22:07.602 }, 00:22:07.602 "peer_address": { 00:22:07.602 "trtype": "TCP", 00:22:07.602 "adrfam": "IPv4", 00:22:07.602 "traddr": "10.0.0.1", 00:22:07.602 "trsvcid": "39100" 00:22:07.602 }, 00:22:07.602 "auth": { 00:22:07.602 "state": "completed", 00:22:07.602 "digest": "sha384", 00:22:07.602 "dhgroup": "ffdhe6144" 00:22:07.602 } 00:22:07.602 } 00:22:07.602 ]' 00:22:07.602 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.864 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.125 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:08.125 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:08.696 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:08.697 09:39:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.957 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.217 00:22:09.217 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.217 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.217 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.478 { 00:22:09.478 "cntlid": 89, 00:22:09.478 "qid": 0, 00:22:09.478 "state": "enabled", 00:22:09.478 "thread": "nvmf_tgt_poll_group_000", 00:22:09.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:09.478 "listen_address": { 00:22:09.478 "trtype": "TCP", 00:22:09.478 "adrfam": "IPv4", 00:22:09.478 "traddr": "10.0.0.2", 00:22:09.478 "trsvcid": "4420" 00:22:09.478 }, 00:22:09.478 "peer_address": { 00:22:09.478 "trtype": "TCP", 00:22:09.478 "adrfam": "IPv4", 00:22:09.478 "traddr": "10.0.0.1", 00:22:09.478 "trsvcid": "45800" 00:22:09.478 }, 00:22:09.478 "auth": { 00:22:09.478 "state": "completed", 00:22:09.478 "digest": "sha384", 00:22:09.478 "dhgroup": "ffdhe8192" 00:22:09.478 } 00:22:09.478 } 00:22:09.478 ]' 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:09.478 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.739 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.739 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.739 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.739 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.739 09:39:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.739 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:09.739 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:10.348 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.348 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:10.348 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.348 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.609 09:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.609 09:39:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.180 00:22:11.180 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:11.180 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:11.180 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:11.440 { 00:22:11.440 "cntlid": 91, 00:22:11.440 "qid": 0, 00:22:11.440 "state": "enabled", 00:22:11.440 "thread": "nvmf_tgt_poll_group_000", 00:22:11.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:11.440 "listen_address": { 00:22:11.440 "trtype": "TCP", 00:22:11.440 "adrfam": "IPv4", 00:22:11.440 "traddr": "10.0.0.2", 00:22:11.440 "trsvcid": "4420" 00:22:11.440 }, 00:22:11.440 "peer_address": { 00:22:11.440 "trtype": "TCP", 00:22:11.440 "adrfam": "IPv4", 00:22:11.440 "traddr": "10.0.0.1", 00:22:11.440 "trsvcid": "45824" 00:22:11.440 }, 00:22:11.440 "auth": { 00:22:11.440 "state": "completed", 00:22:11.440 "digest": "sha384", 00:22:11.440 "dhgroup": "ffdhe8192" 00:22:11.440 } 00:22:11.440 } 00:22:11.440 ]' 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:11.440 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:11.441 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.701 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:11.701 09:39:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:12.271 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:12.272 09:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:12.272 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.542 09:39:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.844 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.157 09:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.157 { 00:22:13.157 "cntlid": 93, 00:22:13.157 "qid": 0, 00:22:13.157 "state": "enabled", 00:22:13.157 "thread": "nvmf_tgt_poll_group_000", 00:22:13.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.157 "listen_address": { 00:22:13.157 "trtype": "TCP", 00:22:13.157 "adrfam": "IPv4", 00:22:13.157 "traddr": "10.0.0.2", 00:22:13.157 "trsvcid": "4420" 00:22:13.157 }, 00:22:13.157 "peer_address": { 00:22:13.157 "trtype": "TCP", 00:22:13.157 "adrfam": "IPv4", 00:22:13.157 "traddr": "10.0.0.1", 00:22:13.157 "trsvcid": "45844" 00:22:13.157 }, 00:22:13.157 "auth": { 00:22:13.157 "state": "completed", 00:22:13.157 "digest": "sha384", 00:22:13.157 "dhgroup": "ffdhe8192" 00:22:13.157 } 00:22:13.157 } 00:22:13.157 ]' 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.157 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:13.420 09:39:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:13.992 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.992 09:39:49 
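
[Annotation] Condensed, each (digest, dhgroup, key) iteration traced above repeats the same host/target RPC sequence. The sketch below uses the wrappers noted earlier; the NQNs, address, and port are the ones from this run, while $hostnqn and $keyid are illustrative stand-ins for the actual values (nqn.2014-08.org.nvmexpress:uuid:00d0226a-... and the loop index):

    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # ... qpair auth checks as sketched earlier ...
    hostrpc bdev_nvme_detach_controller nvme0
    # ... nvme-cli connect/disconnect ...
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
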
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:13.992 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.992 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.993 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.993 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.993 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:13.993 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.253 09:39:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.825 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.825 { 00:22:14.825 "cntlid": 95, 00:22:14.825 "qid": 0, 00:22:14.825 "state": "enabled", 00:22:14.825 "thread": "nvmf_tgt_poll_group_000", 00:22:14.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:14.825 "listen_address": { 00:22:14.825 "trtype": "TCP", 00:22:14.825 "adrfam": "IPv4", 00:22:14.825 "traddr": "10.0.0.2", 00:22:14.825 "trsvcid": "4420" 00:22:14.825 }, 00:22:14.825 "peer_address": { 00:22:14.825 "trtype": "TCP", 00:22:14.825 "adrfam": "IPv4", 00:22:14.825 "traddr": "10.0.0.1", 00:22:14.825 "trsvcid": "45880" 00:22:14.825 }, 00:22:14.825 "auth": { 00:22:14.825 "state": "completed", 00:22:14.825 "digest": "sha384", 00:22:14.825 "dhgroup": "ffdhe8192" 00:22:14.825 } 00:22:14.825 } 00:22:14.825 ]' 00:22:14.825 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.086 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.348 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:15.348 09:39:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.920 09:39:51 
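
[Annotation] Each iteration also exercises the kernel-initiator path: after detaching the bdev controller, the script connects with nvme-cli using in-band DH-HMAC-CHAP secrets, confirms the disconnect, and removes the host from the subsystem. A minimal sketch of that connect/disconnect pair; the DHHC-1 secrets are the base64-encoded values shown verbatim in the traces above, abbreviated here as placeholders:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
        --dhchap-secret 'DHHC-1:03:<host secret, elided>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret, elided>'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)
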
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:15.920 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.180 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.180 00:22:16.440 
09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:16.440 { 00:22:16.440 "cntlid": 97, 00:22:16.440 "qid": 0, 00:22:16.440 "state": "enabled", 00:22:16.440 "thread": "nvmf_tgt_poll_group_000", 00:22:16.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:16.440 "listen_address": { 00:22:16.440 "trtype": "TCP", 00:22:16.440 "adrfam": "IPv4", 00:22:16.440 "traddr": "10.0.0.2", 00:22:16.440 "trsvcid": "4420" 00:22:16.440 }, 00:22:16.440 "peer_address": { 00:22:16.440 "trtype": "TCP", 00:22:16.440 "adrfam": "IPv4", 00:22:16.440 "traddr": "10.0.0.1", 00:22:16.440 "trsvcid": "45902" 00:22:16.440 }, 00:22:16.440 "auth": { 00:22:16.440 "state": "completed", 00:22:16.440 "digest": "sha512", 00:22:16.440 "dhgroup": "null" 00:22:16.440 } 00:22:16.440 } 00:22:16.440 ]' 00:22:16.440 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:16.441 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:16.441 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:16.701 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:16.701 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:16.701 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:16.701 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:16.701 09:39:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.961 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:16.961 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.535 09:39:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:17.795 00:22:17.795 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.795 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.795 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.055 { 00:22:18.055 "cntlid": 99, 00:22:18.055 "qid": 0, 00:22:18.055 "state": "enabled", 00:22:18.055 "thread": "nvmf_tgt_poll_group_000", 00:22:18.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.055 "listen_address": { 00:22:18.055 "trtype": "TCP", 00:22:18.055 "adrfam": "IPv4", 00:22:18.055 "traddr": "10.0.0.2", 00:22:18.055 "trsvcid": "4420" 00:22:18.055 }, 00:22:18.055 "peer_address": { 00:22:18.055 "trtype": "TCP", 00:22:18.055 "adrfam": "IPv4", 00:22:18.055 "traddr": "10.0.0.1", 00:22:18.055 "trsvcid": "45946" 00:22:18.055 }, 00:22:18.055 "auth": { 00:22:18.055 "state": "completed", 00:22:18.055 "digest": "sha512", 00:22:18.055 "dhgroup": "null" 00:22:18.055 } 00:22:18.055 } 00:22:18.055 ]' 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:18.055 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.315 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.315 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.315 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.315 09:39:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:18.315 09:39:53 
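
[Annotation] One detail worth noting in the traces: the recurring assignment ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) uses bash's :+ alternate-value expansion, so the controller-key argument is emitted only when a ckey exists for that index. In the key3 iterations above it expands to an empty array, which is why those nvmf_subsystem_add_host and attach_controller calls carry --dhchap-key key3 with no --dhchap-ctrlr-key, making that leg unidirectional. A self-contained illustration with hypothetical values:

    ckeys=("c0" "c1" "c2" "")   # hypothetical: no controller key at index 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"          # prints 0 -> no controller-key flag is passed
    keyid=2
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"           # prints: --dhchap-ctrlr-key ckey2
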
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:18.885 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.885 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.885 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.885 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
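The entries above repeat one fixed cycle per key index: the host-side RPC server on /var/tmp/host.sock is pinned to a single digest/dhgroup pair, the host NQN is registered on the subsystem together with a DH-CHAP key and controller key, and a bdev controller is attached over TCP, which forces a DH-HMAC-CHAP handshake. A minimal sketch of that cycle, assuming the target is already listening on 10.0.0.2:4420, that key objects named key2/ckey2 were loaded earlier in the run (their creation is not shown in this part of the trace), and that plain rpc.py calls stand in for the trace's rpc_cmd/hostrpc wrappers (rpc.py with no -s talks to the target's default /var/tmp/spdk.sock socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: allow exactly one digest and one DH group for this pass.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups null

    # Target side: register the host NQN with this iteration's key pair.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach a controller; success implies the handshake passed.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2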
00:22:19.145 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:19.406 00:22:19.406 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.406 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.406 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.667 { 00:22:19.667 "cntlid": 101, 00:22:19.667 "qid": 0, 00:22:19.667 "state": "enabled", 00:22:19.667 "thread": "nvmf_tgt_poll_group_000", 00:22:19.667 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:19.667 "listen_address": { 00:22:19.667 "trtype": "TCP", 00:22:19.667 "adrfam": "IPv4", 00:22:19.667 "traddr": "10.0.0.2", 00:22:19.667 "trsvcid": "4420" 00:22:19.667 }, 00:22:19.667 "peer_address": { 00:22:19.667 "trtype": "TCP", 00:22:19.667 "adrfam": "IPv4", 00:22:19.667 "traddr": "10.0.0.1", 00:22:19.667 "trsvcid": "34864" 00:22:19.667 }, 00:22:19.667 "auth": { 00:22:19.667 "state": "completed", 00:22:19.667 "digest": "sha512", 00:22:19.667 "dhgroup": "null" 00:22:19.667 } 00:22:19.667 } 00:22:19.667 ]' 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.667 09:39:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.667 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:19.667 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.667 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.667 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.667 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.926 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:19.926 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:20.496 09:39:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:20.756 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:21.015 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.015 { 00:22:21.015 "cntlid": 103, 00:22:21.015 "qid": 0, 00:22:21.015 "state": "enabled", 00:22:21.015 "thread": "nvmf_tgt_poll_group_000", 00:22:21.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:21.015 "listen_address": { 00:22:21.015 "trtype": "TCP", 00:22:21.015 "adrfam": "IPv4", 00:22:21.015 "traddr": "10.0.0.2", 00:22:21.015 "trsvcid": "4420" 00:22:21.015 }, 00:22:21.015 "peer_address": { 00:22:21.015 "trtype": "TCP", 00:22:21.015 "adrfam": "IPv4", 00:22:21.015 "traddr": "10.0.0.1", 00:22:21.015 "trsvcid": "34872" 00:22:21.015 }, 00:22:21.015 "auth": { 00:22:21.015 "state": "completed", 00:22:21.015 "digest": "sha512", 00:22:21.015 "dhgroup": "null" 00:22:21.015 } 00:22:21.015 } 00:22:21.015 ]' 00:22:21.015 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.275 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.534 09:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:21.534 09:39:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:22.104 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.105 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.105 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
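After each attach, the script checks that authentication really completed with the parameters under test, rather than merely that the connect returned: it reads the controller name back and inspects the qpair's auth block with jq (the backslash-escaped right-hand sides such as \f\f\d\h\e\2\0\4\8 in the trace are only how xtrace prints a quoted [[ ]] pattern). A condensed sketch of the checks mirrored from the target/auth.sh@73-78 entries, reusing the variables from the earlier sketch:

    name=$($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Tear the controller down before the next key index is exercised.
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0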
00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.366 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:22.366 00:22:22.671 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.671 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.671 09:39:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:22.671 { 00:22:22.671 "cntlid": 105, 00:22:22.671 "qid": 0, 00:22:22.671 "state": "enabled", 00:22:22.671 "thread": "nvmf_tgt_poll_group_000", 00:22:22.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:22.671 "listen_address": { 00:22:22.671 "trtype": "TCP", 00:22:22.671 "adrfam": "IPv4", 00:22:22.671 "traddr": "10.0.0.2", 00:22:22.671 "trsvcid": "4420" 00:22:22.671 }, 00:22:22.671 "peer_address": { 00:22:22.671 "trtype": "TCP", 00:22:22.671 "adrfam": "IPv4", 00:22:22.671 "traddr": "10.0.0.1", 00:22:22.671 "trsvcid": "34904" 00:22:22.671 }, 00:22:22.671 "auth": { 00:22:22.671 "state": "completed", 00:22:22.671 "digest": "sha512", 00:22:22.671 "dhgroup": "ffdhe2048" 00:22:22.671 } 00:22:22.671 } 00:22:22.671 ]' 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:22.671 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:22.931 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:22.931 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.931 09:39:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.931 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:22.931 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:23.871 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.871 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:23.871 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.871 09:39:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.871 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:24.131 00:22:24.131 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.131 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.131 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.391 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.391 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.391 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.391 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.391 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.392 { 00:22:24.392 "cntlid": 107, 00:22:24.392 "qid": 0, 00:22:24.392 "state": "enabled", 00:22:24.392 "thread": "nvmf_tgt_poll_group_000", 00:22:24.392 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.392 "listen_address": { 00:22:24.392 "trtype": "TCP", 00:22:24.392 "adrfam": "IPv4", 00:22:24.392 "traddr": "10.0.0.2", 00:22:24.392 "trsvcid": "4420" 00:22:24.392 }, 00:22:24.392 "peer_address": { 00:22:24.392 "trtype": "TCP", 00:22:24.392 "adrfam": "IPv4", 00:22:24.392 "traddr": "10.0.0.1", 00:22:24.392 "trsvcid": "34926" 00:22:24.392 }, 00:22:24.392 "auth": { 00:22:24.392 "state": "completed", 00:22:24.392 "digest": "sha512", 00:22:24.392 "dhgroup": "ffdhe2048" 00:22:24.392 } 00:22:24.392 } 00:22:24.392 ]' 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.392 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.651 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:24.652 09:39:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:25.223 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
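Each cycle also proves the same key pair from the kernel initiator's side: nvme-cli connects with the secrets passed in their textual DHHC-1 form and is disconnected again once the session is up. In that representation, DHHC-1:NN:<base64>:, the two-digit field records which hash, if any, the configured secret was transformed with (00 for none, 01/02/03 for SHA-256/384/512); that reading, like the flag glosses below, is from memory of the NVMe secret representation and nvme-cli rather than from this trace. A sketch with the secret values elided, where -i 1 requests a single I/O queue and -l 0 sets the controller-loss timeout to zero so a failed session is dropped rather than retried:

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid "${hostnqn#*uuid:}" -l 0 \
        --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'

    nvme disconnect -n "$subnqn"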
00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.482 09:40:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.742 00:22:25.742 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.742 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.742 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.003 { 00:22:26.003 "cntlid": 109, 00:22:26.003 "qid": 0, 00:22:26.003 "state": "enabled", 00:22:26.003 "thread": "nvmf_tgt_poll_group_000", 00:22:26.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:26.003 "listen_address": { 00:22:26.003 "trtype": "TCP", 00:22:26.003 "adrfam": "IPv4", 00:22:26.003 "traddr": "10.0.0.2", 00:22:26.003 "trsvcid": "4420" 00:22:26.003 }, 00:22:26.003 "peer_address": { 00:22:26.003 "trtype": "TCP", 00:22:26.003 "adrfam": "IPv4", 00:22:26.003 "traddr": "10.0.0.1", 00:22:26.003 "trsvcid": "34940" 00:22:26.003 }, 00:22:26.003 "auth": { 00:22:26.003 "state": "completed", 00:22:26.003 "digest": "sha512", 00:22:26.003 "dhgroup": "ffdhe2048" 00:22:26.003 } 00:22:26.003 } 00:22:26.003 ]' 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.003 09:40:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.003 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.263 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:26.263 09:40:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:26.849 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.110 09:40:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.110 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.370 00:22:27.370 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.370 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.370 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.630 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.631 { 00:22:27.631 "cntlid": 111, 00:22:27.631 "qid": 0, 00:22:27.631 "state": "enabled", 00:22:27.631 "thread": "nvmf_tgt_poll_group_000", 00:22:27.631 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:27.631 "listen_address": { 00:22:27.631 "trtype": "TCP", 00:22:27.631 "adrfam": "IPv4", 00:22:27.631 "traddr": "10.0.0.2", 00:22:27.631 "trsvcid": "4420" 00:22:27.631 }, 00:22:27.631 "peer_address": { 00:22:27.631 "trtype": "TCP", 00:22:27.631 "adrfam": "IPv4", 00:22:27.631 "traddr": "10.0.0.1", 00:22:27.631 "trsvcid": "34978" 00:22:27.631 }, 00:22:27.631 "auth": { 00:22:27.631 "state": "completed", 00:22:27.631 "digest": "sha512", 00:22:27.631 "dhgroup": "ffdhe2048" 00:22:27.631 } 00:22:27.631 } 00:22:27.631 ]' 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:27.631 
09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.631 09:40:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.890 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:27.890 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.462 09:40:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.723 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.724 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.984 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.985 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.246 { 00:22:29.246 "cntlid": 113, 00:22:29.246 "qid": 0, 00:22:29.246 "state": "enabled", 00:22:29.246 "thread": "nvmf_tgt_poll_group_000", 00:22:29.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.246 "listen_address": { 00:22:29.246 "trtype": "TCP", 00:22:29.246 "adrfam": "IPv4", 00:22:29.246 "traddr": "10.0.0.2", 00:22:29.246 "trsvcid": "4420" 00:22:29.246 }, 00:22:29.246 "peer_address": { 00:22:29.246 "trtype": "TCP", 00:22:29.246 "adrfam": "IPv4", 00:22:29.246 "traddr": "10.0.0.1", 00:22:29.246 "trsvcid": "39420" 00:22:29.246 }, 00:22:29.246 "auth": { 00:22:29.246 "state": "completed", 00:22:29.246 "digest": "sha512", 00:22:29.246 "dhgroup": "ffdhe3072" 00:22:29.246 } 00:22:29.246 } 00:22:29.246 ]' 00:22:29.246 09:40:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.246 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.506 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:29.506 09:40:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:30.076 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.076 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:30.077 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.338 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.338 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.598 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.598 { 00:22:30.598 "cntlid": 115, 00:22:30.598 "qid": 0, 00:22:30.598 "state": "enabled", 00:22:30.598 "thread": "nvmf_tgt_poll_group_000", 00:22:30.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:30.598 "listen_address": { 00:22:30.598 "trtype": "TCP", 00:22:30.598 "adrfam": "IPv4", 00:22:30.598 "traddr": "10.0.0.2", 00:22:30.598 "trsvcid": "4420" 00:22:30.598 }, 00:22:30.598 "peer_address": { 00:22:30.598 "trtype": "TCP", 00:22:30.598 "adrfam": "IPv4", 
00:22:30.599 "traddr": "10.0.0.1", 00:22:30.599 "trsvcid": "39462" 00:22:30.599 }, 00:22:30.599 "auth": { 00:22:30.599 "state": "completed", 00:22:30.599 "digest": "sha512", 00:22:30.599 "dhgroup": "ffdhe3072" 00:22:30.599 } 00:22:30.599 } 00:22:30.599 ]' 00:22:30.599 09:40:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.599 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.599 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:30.860 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.801 09:40:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:31.801 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.061 00:22:32.061 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.061 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.061 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.321 { 00:22:32.321 "cntlid": 117, 00:22:32.321 "qid": 0, 00:22:32.321 "state": "enabled", 00:22:32.321 "thread": "nvmf_tgt_poll_group_000", 00:22:32.321 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:32.321 "listen_address": { 00:22:32.321 "trtype": "TCP", 
00:22:32.321 "adrfam": "IPv4", 00:22:32.321 "traddr": "10.0.0.2", 00:22:32.321 "trsvcid": "4420" 00:22:32.321 }, 00:22:32.321 "peer_address": { 00:22:32.321 "trtype": "TCP", 00:22:32.321 "adrfam": "IPv4", 00:22:32.321 "traddr": "10.0.0.1", 00:22:32.321 "trsvcid": "39478" 00:22:32.321 }, 00:22:32.321 "auth": { 00:22:32.321 "state": "completed", 00:22:32.321 "digest": "sha512", 00:22:32.321 "dhgroup": "ffdhe3072" 00:22:32.321 } 00:22:32.321 } 00:22:32.321 ]' 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.321 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.581 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:32.581 09:40:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.151 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.411 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.671 00:22:33.671 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.671 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.671 09:40:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:33.932 { 00:22:33.932 "cntlid": 119, 00:22:33.932 "qid": 0, 00:22:33.932 "state": "enabled", 00:22:33.932 "thread": "nvmf_tgt_poll_group_000", 00:22:33.932 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:33.932 "listen_address": { 00:22:33.932 "trtype": "TCP", 00:22:33.932 "adrfam": "IPv4", 00:22:33.932 "traddr": "10.0.0.2", 00:22:33.932 "trsvcid": "4420" 00:22:33.932 }, 00:22:33.932 "peer_address": { 00:22:33.932 "trtype": "TCP", 00:22:33.932 "adrfam": "IPv4", 00:22:33.932 "traddr": "10.0.0.1", 00:22:33.932 "trsvcid": "39512" 00:22:33.932 }, 00:22:33.932 "auth": { 00:22:33.932 "state": "completed", 00:22:33.932 "digest": "sha512", 00:22:33.932 "dhgroup": "ffdhe3072" 00:22:33.932 } 00:22:33.932 } 00:22:33.932 ]' 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.932 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.193 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:34.193 09:40:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.777 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:34.778 09:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.043 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.302 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.302 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.563 09:40:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.563 { 00:22:35.563 "cntlid": 121, 00:22:35.563 "qid": 0, 00:22:35.563 "state": "enabled", 00:22:35.563 "thread": "nvmf_tgt_poll_group_000", 00:22:35.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:35.563 "listen_address": { 00:22:35.563 "trtype": "TCP", 00:22:35.563 "adrfam": "IPv4", 00:22:35.563 "traddr": "10.0.0.2", 00:22:35.563 "trsvcid": "4420" 00:22:35.563 }, 00:22:35.563 "peer_address": { 00:22:35.563 "trtype": "TCP", 00:22:35.563 "adrfam": "IPv4", 00:22:35.563 "traddr": "10.0.0.1", 00:22:35.563 "trsvcid": "39540" 00:22:35.563 }, 00:22:35.563 "auth": { 00:22:35.563 "state": "completed", 00:22:35.563 "digest": "sha512", 00:22:35.563 "dhgroup": "ffdhe4096" 00:22:35.563 } 00:22:35.563 } 00:22:35.563 ]' 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.563 09:40:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.823 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:35.823 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.396 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.656 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:36.656 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.657 09:40:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.918 00:22:36.918 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:36.918 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:36.918 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.178 { 00:22:37.178 "cntlid": 123, 00:22:37.178 "qid": 0, 00:22:37.178 "state": "enabled", 00:22:37.178 "thread": "nvmf_tgt_poll_group_000", 00:22:37.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:37.178 "listen_address": { 00:22:37.178 "trtype": "TCP", 00:22:37.178 "adrfam": "IPv4", 00:22:37.178 "traddr": "10.0.0.2", 00:22:37.178 "trsvcid": "4420" 00:22:37.178 }, 00:22:37.178 "peer_address": { 00:22:37.178 "trtype": "TCP", 00:22:37.178 "adrfam": "IPv4", 00:22:37.178 "traddr": "10.0.0.1", 00:22:37.178 "trsvcid": "39566" 00:22:37.178 }, 00:22:37.178 "auth": { 00:22:37.178 "state": "completed", 00:22:37.178 "digest": "sha512", 00:22:37.178 "dhgroup": "ffdhe4096" 00:22:37.178 } 00:22:37.178 } 00:22:37.178 ]' 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.178 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.438 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:37.438 09:40:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.008 09:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.008 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.269 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.529 00:22:38.529 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.529 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.529 09:40:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.790 09:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.790 { 00:22:38.790 "cntlid": 125, 00:22:38.790 "qid": 0, 00:22:38.790 "state": "enabled", 00:22:38.790 "thread": "nvmf_tgt_poll_group_000", 00:22:38.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.790 "listen_address": { 00:22:38.790 "trtype": "TCP", 00:22:38.790 "adrfam": "IPv4", 00:22:38.790 "traddr": "10.0.0.2", 00:22:38.790 "trsvcid": "4420" 00:22:38.790 }, 00:22:38.790 "peer_address": { 00:22:38.790 "trtype": "TCP", 00:22:38.790 "adrfam": "IPv4", 00:22:38.790 "traddr": "10.0.0.1", 00:22:38.790 "trsvcid": "52624" 00:22:38.790 }, 00:22:38.790 "auth": { 00:22:38.790 "state": "completed", 00:22:38.790 "digest": "sha512", 00:22:38.790 "dhgroup": "ffdhe4096" 00:22:38.790 } 00:22:38.790 } 00:22:38.790 ]' 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.790 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.050 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:39.050 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.622 09:40:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:39.882 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.143 00:22:40.143 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.143 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.143 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.403 09:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.403 { 00:22:40.403 "cntlid": 127, 00:22:40.403 "qid": 0, 00:22:40.403 "state": "enabled", 00:22:40.403 "thread": "nvmf_tgt_poll_group_000", 00:22:40.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.403 "listen_address": { 00:22:40.403 "trtype": "TCP", 00:22:40.403 "adrfam": "IPv4", 00:22:40.403 "traddr": "10.0.0.2", 00:22:40.403 "trsvcid": "4420" 00:22:40.403 }, 00:22:40.403 "peer_address": { 00:22:40.403 "trtype": "TCP", 00:22:40.403 "adrfam": "IPv4", 00:22:40.403 "traddr": "10.0.0.1", 00:22:40.403 "trsvcid": "52640" 00:22:40.403 }, 00:22:40.403 "auth": { 00:22:40.403 "state": "completed", 00:22:40.403 "digest": "sha512", 00:22:40.403 "dhgroup": "ffdhe4096" 00:22:40.403 } 00:22:40.403 } 00:22:40.403 ]' 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.403 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.664 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:40.664 09:40:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:41.235 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.496 09:40:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.757 00:22:41.757 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.757 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.757 
09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.019 { 00:22:42.019 "cntlid": 129, 00:22:42.019 "qid": 0, 00:22:42.019 "state": "enabled", 00:22:42.019 "thread": "nvmf_tgt_poll_group_000", 00:22:42.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:42.019 "listen_address": { 00:22:42.019 "trtype": "TCP", 00:22:42.019 "adrfam": "IPv4", 00:22:42.019 "traddr": "10.0.0.2", 00:22:42.019 "trsvcid": "4420" 00:22:42.019 }, 00:22:42.019 "peer_address": { 00:22:42.019 "trtype": "TCP", 00:22:42.019 "adrfam": "IPv4", 00:22:42.019 "traddr": "10.0.0.1", 00:22:42.019 "trsvcid": "52662" 00:22:42.019 }, 00:22:42.019 "auth": { 00:22:42.019 "state": "completed", 00:22:42.019 "digest": "sha512", 00:22:42.019 "dhgroup": "ffdhe6144" 00:22:42.019 } 00:22:42.019 } 00:22:42.019 ]' 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.019 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.279 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:42.280 09:40:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret 
DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:42.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:42.850 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.111 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.373 00:22:43.373 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.373 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.373 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.634 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.634 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.634 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.634 09:40:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.634 { 00:22:43.634 "cntlid": 131, 00:22:43.634 "qid": 0, 00:22:43.634 "state": "enabled", 00:22:43.634 "thread": "nvmf_tgt_poll_group_000", 00:22:43.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.634 "listen_address": { 00:22:43.634 "trtype": "TCP", 00:22:43.634 "adrfam": "IPv4", 00:22:43.634 "traddr": "10.0.0.2", 00:22:43.634 "trsvcid": "4420" 00:22:43.634 }, 00:22:43.634 "peer_address": { 00:22:43.634 "trtype": "TCP", 00:22:43.634 "adrfam": "IPv4", 00:22:43.634 "traddr": "10.0.0.1", 00:22:43.634 "trsvcid": "52684" 00:22:43.634 }, 00:22:43.634 "auth": { 00:22:43.634 "state": "completed", 00:22:43.634 "digest": "sha512", 00:22:43.634 "dhgroup": "ffdhe6144" 00:22:43.634 } 00:22:43.634 } 00:22:43.634 ]' 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:43.634 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.894 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.894 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.894 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:43.894 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:43.894 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.840 09:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.840 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.100 00:22:45.100 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.100 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.100 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.360 { 00:22:45.360 "cntlid": 133, 00:22:45.360 "qid": 0, 00:22:45.360 "state": "enabled", 00:22:45.360 "thread": "nvmf_tgt_poll_group_000", 00:22:45.360 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:45.360 "listen_address": { 00:22:45.360 "trtype": "TCP", 00:22:45.360 "adrfam": "IPv4", 00:22:45.360 "traddr": "10.0.0.2", 00:22:45.360 "trsvcid": "4420" 00:22:45.360 }, 00:22:45.360 "peer_address": { 00:22:45.360 "trtype": "TCP", 00:22:45.360 "adrfam": "IPv4", 00:22:45.360 "traddr": "10.0.0.1", 00:22:45.360 "trsvcid": "52712" 00:22:45.360 }, 00:22:45.360 "auth": { 00:22:45.360 "state": "completed", 00:22:45.360 "digest": "sha512", 00:22:45.360 "dhgroup": "ffdhe6144" 00:22:45.360 } 00:22:45.360 } 00:22:45.360 ]' 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.360 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.620 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret 
DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:45.620 09:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:46.189 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.189 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.189 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.189 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:22:46.449 09:40:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.709 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.969 { 00:22:46.969 "cntlid": 135, 00:22:46.969 "qid": 0, 00:22:46.969 "state": "enabled", 00:22:46.969 "thread": "nvmf_tgt_poll_group_000", 00:22:46.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:46.969 "listen_address": { 00:22:46.969 "trtype": "TCP", 00:22:46.969 "adrfam": "IPv4", 00:22:46.969 "traddr": "10.0.0.2", 00:22:46.969 "trsvcid": "4420" 00:22:46.969 }, 00:22:46.969 "peer_address": { 00:22:46.969 "trtype": "TCP", 00:22:46.969 "adrfam": "IPv4", 00:22:46.969 "traddr": "10.0.0.1", 00:22:46.969 "trsvcid": "52748" 00:22:46.969 }, 00:22:46.969 "auth": { 00:22:46.969 "state": "completed", 00:22:46.969 "digest": "sha512", 00:22:46.969 "dhgroup": "ffdhe6144" 00:22:46.969 } 00:22:46.969 } 00:22:46.969 ]' 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.969 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:47.229 09:40:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.168 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.739 00:22:48.739 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.739 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.739 09:40:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.739 { 00:22:48.739 "cntlid": 137, 00:22:48.739 "qid": 0, 00:22:48.739 "state": "enabled", 00:22:48.739 "thread": "nvmf_tgt_poll_group_000", 00:22:48.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:48.739 "listen_address": { 00:22:48.739 "trtype": "TCP", 00:22:48.739 "adrfam": "IPv4", 00:22:48.739 "traddr": "10.0.0.2", 00:22:48.739 "trsvcid": "4420" 00:22:48.739 }, 00:22:48.739 "peer_address": { 00:22:48.739 "trtype": "TCP", 00:22:48.739 "adrfam": "IPv4", 00:22:48.739 "traddr": "10.0.0.1", 00:22:48.739 "trsvcid": "39618" 00:22:48.739 }, 00:22:48.739 "auth": { 00:22:48.739 "state": "completed", 00:22:48.739 "digest": "sha512", 00:22:48.739 "dhgroup": "ffdhe8192" 00:22:48.739 } 00:22:48.739 } 00:22:48.739 ]' 00:22:48.739 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:49.000 09:40:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.942 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.943 09:40:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.943 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.943 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.943 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.525 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.525 { 00:22:50.525 "cntlid": 139, 00:22:50.525 "qid": 0, 00:22:50.525 "state": "enabled", 00:22:50.525 "thread": "nvmf_tgt_poll_group_000", 00:22:50.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:50.525 "listen_address": { 00:22:50.525 "trtype": "TCP", 00:22:50.525 "adrfam": "IPv4", 00:22:50.525 "traddr": "10.0.0.2", 00:22:50.525 "trsvcid": "4420" 00:22:50.525 }, 00:22:50.525 "peer_address": { 00:22:50.525 "trtype": "TCP", 00:22:50.525 "adrfam": "IPv4", 00:22:50.525 "traddr": "10.0.0.1", 00:22:50.525 "trsvcid": "39650" 00:22:50.525 }, 00:22:50.525 "auth": { 00:22:50.525 "state": "completed", 00:22:50.525 "digest": "sha512", 00:22:50.525 "dhgroup": "ffdhe8192" 00:22:50.525 } 00:22:50.525 } 00:22:50.525 ]' 00:22:50.525 09:40:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.850 09:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.850 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:50.851 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: --dhchap-ctrl-secret DHHC-1:02:YTAzMTg5ODdlN2I0MzdlZDBmYmEwMGMyNzlmY2M1OGZmZWExYjRjNGY4YWIwZjk4FIJFjA==: 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.502 09:40:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.762 09:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.762 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:52.332 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.333 { 00:22:52.333 "cntlid": 141, 00:22:52.333 "qid": 0, 00:22:52.333 "state": "enabled", 00:22:52.333 "thread": "nvmf_tgt_poll_group_000", 00:22:52.333 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.333 "listen_address": { 00:22:52.333 "trtype": "TCP", 00:22:52.333 "adrfam": "IPv4", 00:22:52.333 "traddr": "10.0.0.2", 00:22:52.333 "trsvcid": "4420" 00:22:52.333 }, 00:22:52.333 "peer_address": { 00:22:52.333 "trtype": "TCP", 00:22:52.333 "adrfam": "IPv4", 00:22:52.333 "traddr": "10.0.0.1", 00:22:52.333 "trsvcid": "39678" 00:22:52.333 }, 00:22:52.333 "auth": { 00:22:52.333 "state": "completed", 00:22:52.333 "digest": "sha512", 00:22:52.333 "dhgroup": "ffdhe8192" 00:22:52.333 } 00:22:52.333 } 00:22:52.333 ]' 00:22:52.333 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.593 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.594 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:52.594 09:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:52.594 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.594 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.594 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.594 09:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.855 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:52.855 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:01:ZDU1Y2E1NDcyYTc0Njg4MzVmMmY1NjFhMjU0Y2Y2OTGEHHys: 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.427 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.688 09:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.688 09:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.950 00:22:53.950 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.950 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.950 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.210 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.211 { 00:22:54.211 "cntlid": 143, 00:22:54.211 "qid": 0, 00:22:54.211 "state": "enabled", 00:22:54.211 "thread": "nvmf_tgt_poll_group_000", 00:22:54.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.211 "listen_address": { 00:22:54.211 "trtype": "TCP", 00:22:54.211 "adrfam": "IPv4", 00:22:54.211 "traddr": "10.0.0.2", 00:22:54.211 "trsvcid": "4420" 00:22:54.211 }, 00:22:54.211 "peer_address": { 00:22:54.211 "trtype": "TCP", 00:22:54.211 "adrfam": "IPv4", 00:22:54.211 "traddr": "10.0.0.1", 00:22:54.211 "trsvcid": "39700" 00:22:54.211 }, 00:22:54.211 "auth": { 00:22:54.211 "state": "completed", 00:22:54.211 "digest": "sha512", 00:22:54.211 "dhgroup": "ffdhe8192" 00:22:54.211 } 00:22:54.211 } 00:22:54.211 ]' 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.211 
09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:54.211 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.471 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.471 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.471 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.471 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:54.471 09:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:22:55.042 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.042 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.042 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.042 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.303 09:40:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.303 09:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.873 00:22:55.873 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.873 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.873 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.134 { 00:22:56.134 "cntlid": 145, 00:22:56.134 "qid": 0, 00:22:56.134 "state": "enabled", 00:22:56.134 "thread": "nvmf_tgt_poll_group_000", 00:22:56.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:56.134 "listen_address": { 00:22:56.134 "trtype": "TCP", 00:22:56.134 "adrfam": "IPv4", 00:22:56.134 "traddr": "10.0.0.2", 00:22:56.134 "trsvcid": "4420" 00:22:56.134 }, 00:22:56.134 "peer_address": { 00:22:56.134 
"trtype": "TCP", 00:22:56.134 "adrfam": "IPv4", 00:22:56.134 "traddr": "10.0.0.1", 00:22:56.134 "trsvcid": "39720" 00:22:56.134 }, 00:22:56.134 "auth": { 00:22:56.134 "state": "completed", 00:22:56.134 "digest": "sha512", 00:22:56.134 "dhgroup": "ffdhe8192" 00:22:56.134 } 00:22:56.134 } 00:22:56.134 ]' 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.134 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.394 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:56.394 09:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:MzAwNDQ2ZmI2Yjg1ZjA4N2Q1ZmFlNzFkMGI5YWU1NzVjMmYzMDQ4N2RiZWY0NTNl8Eag5g==: --dhchap-ctrl-secret DHHC-1:03:YTk5ZDc5MWQ4MDAxNWU2NmFkMTE2NjA2NmJjYzY3MzA0NTljOWIyZjk4MTkwMzAxMTVlZDk5MTFjMjQ4YjE3ZC7/qWM=: 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:56.964 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:57.537 request: 00:22:57.537 { 00:22:57.537 "name": "nvme0", 00:22:57.537 "trtype": "tcp", 00:22:57.537 "traddr": "10.0.0.2", 00:22:57.537 "adrfam": "ipv4", 00:22:57.537 "trsvcid": "4420", 00:22:57.537 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:57.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.537 "prchk_reftag": false, 00:22:57.537 "prchk_guard": false, 00:22:57.537 "hdgst": false, 00:22:57.537 "ddgst": false, 00:22:57.538 "dhchap_key": "key2", 00:22:57.538 "allow_unrecognized_csi": false, 00:22:57.538 "method": "bdev_nvme_attach_controller", 00:22:57.538 "req_id": 1 00:22:57.538 } 00:22:57.538 Got JSON-RPC error response 00:22:57.538 response: 00:22:57.538 { 00:22:57.538 "code": -5, 00:22:57.538 "message": "Input/output error" 00:22:57.538 } 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 09:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:57.538 09:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:58.108 request: 00:22:58.108 { 00:22:58.108 "name": "nvme0", 00:22:58.108 "trtype": "tcp", 00:22:58.108 "traddr": "10.0.0.2", 00:22:58.108 "adrfam": "ipv4", 00:22:58.108 "trsvcid": "4420", 00:22:58.108 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:58.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:58.108 "prchk_reftag": false, 00:22:58.108 "prchk_guard": false, 00:22:58.108 "hdgst": false, 00:22:58.108 "ddgst": false, 00:22:58.108 "dhchap_key": "key1", 00:22:58.108 "dhchap_ctrlr_key": "ckey2", 00:22:58.108 "allow_unrecognized_csi": false, 00:22:58.108 "method": "bdev_nvme_attach_controller", 00:22:58.108 "req_id": 1 00:22:58.108 } 00:22:58.108 Got JSON-RPC error response 00:22:58.108 response: 00:22:58.108 { 00:22:58.108 "code": -5, 00:22:58.108 "message": "Input/output error" 00:22:58.108 } 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:58.108 09:40:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.108 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.109 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.109 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:58.369 request: 00:22:58.369 { 00:22:58.369 "name": "nvme0", 00:22:58.369 "trtype": "tcp", 00:22:58.369 "traddr": "10.0.0.2", 00:22:58.369 "adrfam": "ipv4", 00:22:58.369 "trsvcid": "4420", 00:22:58.369 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:58.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:58.369 "prchk_reftag": false, 00:22:58.369 "prchk_guard": false, 00:22:58.369 "hdgst": false, 00:22:58.369 "ddgst": false, 00:22:58.369 "dhchap_key": "key1", 00:22:58.369 "dhchap_ctrlr_key": "ckey1", 00:22:58.369 "allow_unrecognized_csi": false, 00:22:58.369 "method": "bdev_nvme_attach_controller", 00:22:58.369 "req_id": 1 00:22:58.369 } 00:22:58.369 Got JSON-RPC error response 00:22:58.369 response: 00:22:58.369 { 00:22:58.369 "code": -5, 00:22:58.369 "message": "Input/output error" 00:22:58.369 } 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2790759 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2790759 ']' 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2790759 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790759 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790759' 00:22:58.369 killing process with pid 2790759 00:22:58.369 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2790759 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2790759 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2816530 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2816530 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2816530 ']' 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.629 09:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.889 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2816530 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2816530 ']' 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
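The target restart in progress here reduces to a short sequence; this is a minimal sketch using only what this log shows (the cvl_0_0_ns_spdk namespace, the -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth flags, and the suite's waitforlisten helper, which polls the default /var/tmp/spdk.sock RPC socket until the new process answers):

    # Relaunch nvmf_tgt inside the test network namespace with the
    # nvmf_auth log flag enabled; --wait-for-rpc parks initialization
    # until an RPC explicitly starts the subsystems.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Block until the target is listening on its RPC socket.
    waitforlisten "$nvmfpid"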
00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.890 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.150 null0 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.76K 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.onq ]] 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.onq 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.150 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9BV 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.0rU ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.0rU 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:59.151 09:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.YPd 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Num ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Num 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bEy 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
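The keyring_file_add_key loop above and the attach being echoed here condense to the pattern below; a sketch built only from names in this run (rpc_cmd drives the target, while the host-side bdev layer listens on /var/tmp/host.sock), with the raw rpc.py invocation following in the trace:

    # Target side: register the generated key file under the name "key3"
    # and authorize the host NQN to authenticate with it.
    rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bEy
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --dhchap-key key3
    # Host side: attach with the matching key; connect_authenticate then
    # checks the qpair's auth state for the sha512 digest and ffdhe8192
    # DH group chosen for this iteration.
    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3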
00:22:59.151 09:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.094 nvme0n1 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.094 { 00:23:00.094 "cntlid": 1, 00:23:00.094 "qid": 0, 00:23:00.094 "state": "enabled", 00:23:00.094 "thread": "nvmf_tgt_poll_group_000", 00:23:00.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:00.094 "listen_address": { 00:23:00.094 "trtype": "TCP", 00:23:00.094 "adrfam": "IPv4", 00:23:00.094 "traddr": "10.0.0.2", 00:23:00.094 "trsvcid": "4420" 00:23:00.094 }, 00:23:00.094 "peer_address": { 00:23:00.094 "trtype": "TCP", 00:23:00.094 "adrfam": "IPv4", 00:23:00.094 "traddr": "10.0.0.1", 00:23:00.094 "trsvcid": "43660" 00:23:00.094 }, 00:23:00.094 "auth": { 00:23:00.094 "state": "completed", 00:23:00.094 "digest": "sha512", 00:23:00.094 "dhgroup": "ffdhe8192" 00:23:00.094 } 00:23:00.094 } 00:23:00.094 ]' 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:00.094 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.354 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.354 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.354 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.354 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:23:00.354 09:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:23:00.925 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:01.184 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.185 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.444 request: 00:23:01.444 { 00:23:01.444 "name": "nvme0", 00:23:01.444 "trtype": "tcp", 00:23:01.444 "traddr": "10.0.0.2", 00:23:01.444 "adrfam": "ipv4", 00:23:01.444 "trsvcid": "4420", 00:23:01.444 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:01.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.444 "prchk_reftag": false, 00:23:01.444 "prchk_guard": false, 00:23:01.444 "hdgst": false, 00:23:01.444 "ddgst": false, 00:23:01.444 "dhchap_key": "key3", 00:23:01.444 "allow_unrecognized_csi": false, 00:23:01.444 "method": "bdev_nvme_attach_controller", 00:23:01.444 "req_id": 1 00:23:01.444 } 00:23:01.444 Got JSON-RPC error response 00:23:01.444 response: 00:23:01.444 { 00:23:01.444 "code": -5, 00:23:01.444 "message": "Input/output error" 00:23:01.444 } 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:01.444 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.706 09:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.706 request: 00:23:01.706 { 00:23:01.706 "name": "nvme0", 00:23:01.706 "trtype": "tcp", 00:23:01.706 "traddr": "10.0.0.2", 00:23:01.706 "adrfam": "ipv4", 00:23:01.706 "trsvcid": "4420", 00:23:01.706 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:01.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.706 "prchk_reftag": false, 00:23:01.706 "prchk_guard": false, 00:23:01.706 "hdgst": false, 00:23:01.706 "ddgst": false, 00:23:01.706 "dhchap_key": "key3", 00:23:01.706 "allow_unrecognized_csi": false, 00:23:01.706 "method": "bdev_nvme_attach_controller", 00:23:01.706 "req_id": 1 00:23:01.706 } 00:23:01.706 Got JSON-RPC error response 00:23:01.706 response: 00:23:01.706 { 00:23:01.706 "code": -5, 00:23:01.706 "message": "Input/output error" 00:23:01.706 } 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.706 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:01.967 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:02.228 request: 00:23:02.228 { 00:23:02.228 "name": "nvme0", 00:23:02.228 "trtype": "tcp", 00:23:02.228 "traddr": "10.0.0.2", 00:23:02.228 "adrfam": "ipv4", 00:23:02.228 "trsvcid": "4420", 00:23:02.228 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:02.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:02.228 "prchk_reftag": false, 00:23:02.228 "prchk_guard": false, 00:23:02.228 "hdgst": false, 00:23:02.228 "ddgst": false, 00:23:02.228 "dhchap_key": "key0", 00:23:02.228 "dhchap_ctrlr_key": "key1", 00:23:02.228 "allow_unrecognized_csi": false, 00:23:02.228 "method": "bdev_nvme_attach_controller", 00:23:02.228 "req_id": 1 00:23:02.228 } 00:23:02.228 Got JSON-RPC error response 00:23:02.228 response: 00:23:02.228 { 00:23:02.228 "code": -5, 00:23:02.228 "message": "Input/output error" 00:23:02.228 } 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.228 09:40:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:02.228 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:02.488 nvme0n1 00:23:02.488 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:02.488 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.488 09:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:02.748 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.748 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.748 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:03.009 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:03.010 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:03.582 nvme0n1 00:23:03.582 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:03.582 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:03.582 09:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.842 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:04.113 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.113 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:23:04.113 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: --dhchap-ctrl-secret DHHC-1:03:NTJhMjUxMTViYjI1OGRjMWU4OTM0ZjMwNjVhNTg3OWMwYTQzZTJkM2M3NWEzYjdmNGVmZjgxMTc5MDU2ODYyYa+Fpr4=: 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.685 09:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:04.685 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:05.258 request: 00:23:05.258 { 00:23:05.258 "name": "nvme0", 00:23:05.258 "trtype": "tcp", 00:23:05.258 "traddr": "10.0.0.2", 00:23:05.258 "adrfam": "ipv4", 00:23:05.258 "trsvcid": "4420", 00:23:05.258 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:05.258 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:05.258 "prchk_reftag": false, 00:23:05.258 "prchk_guard": false, 00:23:05.258 "hdgst": false, 00:23:05.258 "ddgst": false, 00:23:05.258 "dhchap_key": "key1", 00:23:05.258 "allow_unrecognized_csi": false, 00:23:05.258 "method": "bdev_nvme_attach_controller", 00:23:05.258 "req_id": 1 00:23:05.258 } 00:23:05.258 Got JSON-RPC error response 00:23:05.258 response: 00:23:05.258 { 00:23:05.258 "code": -5, 00:23:05.258 "message": "Input/output error" 00:23:05.258 } 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.258 09:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:05.830 nvme0n1 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.090 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:06.350 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:06.609 nvme0n1 00:23:06.609 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:06.609 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:06.609 09:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: '' 2s 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: ]] 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Yzc5YTQyNTlkNmU5Y2Y5NDZkODFiN2Y5ZjkxMmNjM2SbHOxD: 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:06.870 09:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: 2s 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: ]] 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YWY0NTlhMzdlYzhiODFlNzllMjY2MTI5MzRhY2JhODJiNDJhOTQ3MDA4MTViZmVjs/6f6w==: 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:09.412 09:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:11.325 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:11.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.326 09:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:11.898 nvme0n1 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:11.898 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:12.159 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:12.159 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:12.159 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:12.421 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:12.683 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:12.683 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:12.683 09:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.683 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.683 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:12.684 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:12.946 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.946 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:12.946 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:12.946 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:12.946 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:13.208 request: 00:23:13.208 { 00:23:13.208 "name": "nvme0", 00:23:13.208 "dhchap_key": "key1", 00:23:13.208 "dhchap_ctrlr_key": "key3", 00:23:13.208 "method": "bdev_nvme_set_keys", 00:23:13.208 "req_id": 1 00:23:13.208 } 00:23:13.208 Got JSON-RPC error response 00:23:13.208 response: 00:23:13.208 { 00:23:13.208 "code": -13, 00:23:13.208 "message": "Permission denied" 00:23:13.208 } 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:13.208 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:13.469 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:23:13.469 09:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:14.413 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:14.413 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:14.413 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:14.675 09:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:15.249 nvme0n1 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:15.510 09:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:15.772 request: 00:23:15.772 { 00:23:15.772 "name": "nvme0", 00:23:15.772 "dhchap_key": "key2", 00:23:15.772 "dhchap_ctrlr_key": "key0", 00:23:15.772 "method": "bdev_nvme_set_keys", 00:23:15.772 "req_id": 1 00:23:15.772 } 00:23:15.772 Got JSON-RPC error response 00:23:15.772 response: 00:23:15.772 { 00:23:15.772 "code": -13, 00:23:15.772 "message": "Permission denied" 00:23:15.773 } 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:15.773 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:16.033 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:16.033 09:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:16.975 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:16.975 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:16.975 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2790950 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2790950 ']' 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2790950 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:17.235 
09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2790950 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2790950' 00:23:17.235 killing process with pid 2790950 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2790950 00:23:17.235 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2790950 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.496 rmmod nvme_tcp 00:23:17.496 rmmod nvme_fabrics 00:23:17.496 rmmod nvme_keyring 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2816530 ']' 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2816530 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2816530 ']' 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2816530 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2816530 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2816530' 00:23:17.496 killing process with pid 2816530 00:23:17.496 09:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2816530 00:23:17.496 09:40:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2816530 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.757 09:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.670 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.670 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.76K /tmp/spdk.key-sha256.9BV /tmp/spdk.key-sha384.YPd /tmp/spdk.key-sha512.bEy /tmp/spdk.key-sha512.onq /tmp/spdk.key-sha384.0rU /tmp/spdk.key-sha256.Num '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:19.670 00:23:19.670 real 2m33.613s 00:23:19.670 user 5m45.900s 00:23:19.670 sys 0m24.335s 00:23:19.670 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:19.671 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.671 ************************************ 00:23:19.671 END TEST nvmf_auth_target 00:23:19.671 ************************************ 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:19.931 ************************************ 00:23:19.931 START TEST nvmf_bdevio_no_huge 00:23:19.931 ************************************ 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:19.931 * Looking for test storage... 
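
The nvmf_auth_target run that closes above is, at its core, a DH-CHAP key-rotation sequence driven over two RPC sockets: rpc_cmd talks to the target on the default socket, hostrpc to the host-side app on /var/tmp/host.sock. A condensed, illustrative recap follows (socket path, NQNs, and key slots copied from the log; this sketches what the test drives and is not an extra test step):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
subnqn=nqn.2024-03.io.spdk:cnode0

# Host connects with the current pair (key0/key1).
$rpc_py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key key1 \
    --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1

# Target learns the next pair for this host...
$rpc_py nvmf_subsystem_set_keys "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# ...and the host rotates its controller to the matching pair.
$rpc_py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# Rotating to a pair the target does not expect (e.g. key1/key3 above) is
# the negative case: it fails with JSON-RPC error -13, "Permission denied".
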
00:23:19.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.931 --rc genhtml_branch_coverage=1 00:23:19.931 --rc genhtml_function_coverage=1 00:23:19.931 --rc genhtml_legend=1 00:23:19.931 --rc geninfo_all_blocks=1 00:23:19.931 --rc geninfo_unexecuted_blocks=1 00:23:19.931 00:23:19.931 ' 00:23:19.931 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:19.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.931 --rc genhtml_branch_coverage=1 00:23:19.931 --rc genhtml_function_coverage=1 00:23:19.931 --rc genhtml_legend=1 00:23:19.931 --rc geninfo_all_blocks=1 00:23:19.931 --rc geninfo_unexecuted_blocks=1 00:23:19.931 00:23:19.931 ' 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:20.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.192 --rc genhtml_branch_coverage=1 00:23:20.192 --rc genhtml_function_coverage=1 00:23:20.192 --rc genhtml_legend=1 00:23:20.192 --rc geninfo_all_blocks=1 00:23:20.192 --rc geninfo_unexecuted_blocks=1 00:23:20.192 00:23:20.192 ' 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:20.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:20.192 --rc genhtml_branch_coverage=1 00:23:20.192 --rc genhtml_function_coverage=1 00:23:20.192 --rc genhtml_legend=1 00:23:20.192 --rc geninfo_all_blocks=1 00:23:20.192 --rc geninfo_unexecuted_blocks=1 00:23:20.192 00:23:20.192 ' 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.192 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:23:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:20.193 09:40:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:28.332 
09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:28.332 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:28.333 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:28.333 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:28.333 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:28.333 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:28.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:23:28.333 00:23:28.333 --- 10.0.0.2 ping statistics --- 00:23:28.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.333 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:23:28.333 00:23:28.333 --- 10.0.0.1 ping statistics --- 00:23:28.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.333 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2824570 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2824570 00:23:28.333 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2824570 ']' 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.334 09:41:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 [2024-12-09 09:41:02.866693] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:23:28.334 [2024-12-09 09:41:02.866768] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:28.334 [2024-12-09 09:41:02.971465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.334 [2024-12-09 09:41:03.017986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.334 [2024-12-09 09:41:03.018040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.334 [2024-12-09 09:41:03.018049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.334 [2024-12-09 09:41:03.018057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.334 [2024-12-09 09:41:03.018064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
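
The reactor messages that follow belong to that nvmf_tgt instance. Condensed, the bring-up this test performs before handing off to bdevio is the sequence below (commands as they appear in this log; rpc_cmd in the transcript is the suite's wrapper around rpc.py on the default /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target runs inside the test network namespace, without hugepages
# (-s 1024 limits the memory size used with --no-huge).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &

# Once the RPC socket is up: transport, backing bdev, subsystem,
# namespace, listener.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
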
00:23:28.334 [2024-12-09 09:41:03.019567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.334 [2024-12-09 09:41:03.019708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:28.334 [2024-12-09 09:41:03.019871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:28.334 [2024-12-09 09:41:03.019872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 [2024-12-09 09:41:03.743197] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 Malloc0 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.334 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.596 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.596 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:23:28.596 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.596 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:28.597 [2024-12-09 09:41:03.797153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.597 { 00:23:28.597 "params": { 00:23:28.597 "name": "Nvme$subsystem", 00:23:28.597 "trtype": "$TEST_TRANSPORT", 00:23:28.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.597 "adrfam": "ipv4", 00:23:28.597 "trsvcid": "$NVMF_PORT", 00:23:28.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.597 "hdgst": ${hdgst:-false}, 00:23:28.597 "ddgst": ${ddgst:-false} 00:23:28.597 }, 00:23:28.597 "method": "bdev_nvme_attach_controller" 00:23:28.597 } 00:23:28.597 EOF 00:23:28.597 )") 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:28.597 09:41:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:28.597 "params": { 00:23:28.597 "name": "Nvme1", 00:23:28.597 "trtype": "tcp", 00:23:28.597 "traddr": "10.0.0.2", 00:23:28.597 "adrfam": "ipv4", 00:23:28.597 "trsvcid": "4420", 00:23:28.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.597 "hdgst": false, 00:23:28.597 "ddgst": false 00:23:28.597 }, 00:23:28.597 "method": "bdev_nvme_attach_controller" 00:23:28.597 }' 00:23:28.597 [2024-12-09 09:41:03.856462] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:23:28.597 [2024-12-09 09:41:03.856535] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2824700 ] 00:23:28.597 [2024-12-09 09:41:03.950215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:28.597 [2024-12-09 09:41:03.996117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.597 [2024-12-09 09:41:03.996247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.597 [2024-12-09 09:41:03.996249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.170 I/O targets: 00:23:29.170 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:29.170 00:23:29.170 00:23:29.170 CUnit - A unit testing framework for C - Version 2.1-3 00:23:29.170 http://cunit.sourceforge.net/ 00:23:29.170 00:23:29.170 00:23:29.170 Suite: bdevio tests on: Nvme1n1 00:23:29.170 Test: blockdev write read block ...passed 00:23:29.170 Test: blockdev write zeroes read block ...passed 00:23:29.170 Test: blockdev write zeroes read no split ...passed 00:23:29.170 Test: blockdev write zeroes read split ...passed 00:23:29.170 Test: blockdev write zeroes read split partial ...passed 00:23:29.170 Test: blockdev reset ...[2024-12-09 09:41:04.540863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:29.170 [2024-12-09 09:41:04.540943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96e5e0 (9): Bad file descriptor 00:23:29.170 [2024-12-09 09:41:04.596673] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:23:29.170 passed 00:23:29.170 Test: blockdev write read 8 blocks ...passed 00:23:29.170 Test: blockdev write read size > 128k ...passed 00:23:29.170 Test: blockdev write read invalid size ...passed 00:23:29.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:29.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:29.432 Test: blockdev write read max offset ...passed 00:23:29.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:29.432 Test: blockdev writev readv 8 blocks ...passed 00:23:29.432 Test: blockdev writev readv 30 x 1block ...passed 00:23:29.432 Test: blockdev writev readv block ...passed 00:23:29.432 Test: blockdev writev readv size > 128k ...passed 00:23:29.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:29.432 Test: blockdev comparev and writev ...[2024-12-09 09:41:04.780187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.780219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.780230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.780236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.780690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.780699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.780709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.780714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.781208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.781216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.781226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.781231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.781719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.781727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.781737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:29.432 [2024-12-09 09:41:04.781742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:29.432 passed 00:23:29.432 Test: blockdev nvme passthru rw ...passed 00:23:29.432 Test: blockdev nvme passthru vendor specific ...[2024-12-09 09:41:04.865429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.432 [2024-12-09 09:41:04.865440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.865807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.432 [2024-12-09 09:41:04.865817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.866152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.432 [2024-12-09 09:41:04.866161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:29.432 [2024-12-09 09:41:04.866490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:29.432 [2024-12-09 09:41:04.866500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:29.432 passed 00:23:29.432 Test: blockdev nvme admin passthru ...passed 00:23:29.693 Test: blockdev copy ...passed 00:23:29.693 00:23:29.693 Run Summary: Type Total Ran Passed Failed Inactive 00:23:29.693 suites 1 1 n/a 0 0 00:23:29.693 tests 23 23 23 0 0 00:23:29.693 asserts 152 152 152 0 n/a 00:23:29.693 00:23:29.693 Elapsed time = 1.208 seconds 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.955 rmmod nvme_tcp 00:23:29.955 rmmod nvme_fabrics 00:23:29.955 rmmod nvme_keyring 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2824570 ']' 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2824570 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2824570 ']' 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2824570 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2824570 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2824570' 00:23:29.955 killing process with pid 2824570 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2824570 00:23:29.955 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2824570 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.526 09:41:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.440 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:32.440 00:23:32.440 real 0m12.556s 00:23:32.440 user 0m14.848s 00:23:32.440 sys 0m6.670s 00:23:32.440 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:23:32.441 ************************************ 00:23:32.441 END TEST nvmf_bdevio_no_huge 00:23:32.441 ************************************ 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:32.441 ************************************ 00:23:32.441 START TEST nvmf_tls 00:23:32.441 ************************************ 00:23:32.441 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:32.703 * Looking for test storage... 00:23:32.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:32.703 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:32.703 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:23:32.703 09:41:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.703 --rc genhtml_branch_coverage=1 00:23:32.703 --rc genhtml_function_coverage=1 00:23:32.703 --rc genhtml_legend=1 00:23:32.703 --rc geninfo_all_blocks=1 00:23:32.703 --rc geninfo_unexecuted_blocks=1 00:23:32.703 00:23:32.703 ' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.703 --rc genhtml_branch_coverage=1 00:23:32.703 --rc genhtml_function_coverage=1 00:23:32.703 --rc genhtml_legend=1 00:23:32.703 --rc geninfo_all_blocks=1 00:23:32.703 --rc geninfo_unexecuted_blocks=1 00:23:32.703 00:23:32.703 ' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.703 --rc genhtml_branch_coverage=1 00:23:32.703 --rc genhtml_function_coverage=1 00:23:32.703 --rc genhtml_legend=1 00:23:32.703 --rc geninfo_all_blocks=1 00:23:32.703 --rc geninfo_unexecuted_blocks=1 00:23:32.703 00:23:32.703 ' 00:23:32.703 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:32.703 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:32.703 --rc genhtml_branch_coverage=1 00:23:32.703 --rc genhtml_function_coverage=1 00:23:32.703 --rc genhtml_legend=1 00:23:32.703 --rc geninfo_all_blocks=1 00:23:32.703 --rc geninfo_unexecuted_blocks=1 00:23:32.703 00:23:32.704 ' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:32.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.704 09:41:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:40.911 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:40.912 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:40.912 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:40.912 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:40.912 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:40.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:40.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:23:40.912 00:23:40.912 --- 10.0.0.2 ping statistics --- 00:23:40.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.912 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:40.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:40.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:40.912 00:23:40.912 --- 10.0.0.1 ping statistics --- 00:23:40.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:40.912 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2829362 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2829362 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2829362 ']' 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.912 09:41:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.912 [2024-12-09 09:41:15.668319] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:23:40.912 [2024-12-09 09:41:15.668386] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.912 [2024-12-09 09:41:15.768297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.912 [2024-12-09 09:41:15.794300] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.912 [2024-12-09 09:41:15.794347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.912 [2024-12-09 09:41:15.794361] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.912 [2024-12-09 09:41:15.794367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.912 [2024-12-09 09:41:15.794374] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:40.912 [2024-12-09 09:41:15.795143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:41.173 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:41.434 true 00:23:41.434 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.434 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:41.696 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:41.696 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:41.696 09:41:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:41.696 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:41.696 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:41.957 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:41.957 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:41.957 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.217 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:42.493 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:42.493 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:42.493 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:23:42.834 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:42.834 09:41:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:42.834 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:42.835 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:42.835 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:43.149 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.709AaRasRl 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Qse5bJRum1 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.709AaRasRl 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Qse5bJRum1 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:43.410 09:41:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:43.671 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.709AaRasRl 00:23:43.671 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.709AaRasRl 00:23:43.671 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:43.932 [2024-12-09 09:41:19.200524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.932 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:43.932 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:44.192 [2024-12-09 09:41:19.533327] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.192 [2024-12-09 09:41:19.533533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.192 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:44.451 malloc0 00:23:44.451 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:44.451 09:41:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.709AaRasRl 00:23:44.711 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.971 09:41:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.709AaRasRl 00:23:54.973 Initializing NVMe Controllers 00:23:54.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:54.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:54.974 Initialization complete. Launching workers. 00:23:54.974 ======================================================== 00:23:54.974 Latency(us) 00:23:54.974 Device Information : IOPS MiB/s Average min max 00:23:54.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18632.84 72.78 3435.03 1097.71 4957.97 00:23:54.974 ======================================================== 00:23:54.974 Total : 18632.84 72.78 3435.03 1097.71 4957.97 00:23:54.974 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.709AaRasRl 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.709AaRasRl 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2832119 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2832119 /var/tmp/bdevperf.sock 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2832119 ']' 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:54.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.974 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.974 [2024-12-09 09:41:30.393925] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:23:54.974 [2024-12-09 09:41:30.393982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2832119 ] 00:23:55.235 [2024-12-09 09:41:30.450553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.235 [2024-12-09 09:41:30.466677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.235 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.235 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:55.235 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.709AaRasRl 00:23:55.497 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:55.497 [2024-12-09 09:41:30.843312] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.497 TLSTESTn1 00:23:55.497 09:41:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:55.758 Running I/O for 10 seconds... 
00:23:57.642 5014.00 IOPS, 19.59 MiB/s [2024-12-09T08:41:34.034Z] 5143.00 IOPS, 20.09 MiB/s [2024-12-09T08:41:35.412Z] 5111.67 IOPS, 19.97 MiB/s [2024-12-09T08:41:36.350Z] 5421.75 IOPS, 21.18 MiB/s [2024-12-09T08:41:37.289Z] 5370.40 IOPS, 20.98 MiB/s [2024-12-09T08:41:38.230Z] 5225.17 IOPS, 20.41 MiB/s [2024-12-09T08:41:39.172Z] 5267.14 IOPS, 20.57 MiB/s [2024-12-09T08:41:40.114Z] 5425.00 IOPS, 21.19 MiB/s [2024-12-09T08:41:41.057Z] 5437.44 IOPS, 21.24 MiB/s [2024-12-09T08:41:41.319Z] 5372.10 IOPS, 20.98 MiB/s 00:24:05.866 Latency(us) 00:24:05.866 [2024-12-09T08:41:41.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.866 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:05.866 Verification LBA range: start 0x0 length 0x2000 00:24:05.866 TLSTESTn1 : 10.03 5371.04 20.98 0.00 0.00 23789.16 4478.29 97867.09 00:24:05.866 [2024-12-09T08:41:41.319Z] =================================================================================================================== 00:24:05.866 [2024-12-09T08:41:41.319Z] Total : 5371.04 20.98 0.00 0.00 23789.16 4478.29 97867.09 00:24:05.866 { 00:24:05.866 "results": [ 00:24:05.866 { 00:24:05.866 "job": "TLSTESTn1", 00:24:05.866 "core_mask": "0x4", 00:24:05.866 "workload": "verify", 00:24:05.866 "status": "finished", 00:24:05.866 "verify_range": { 00:24:05.866 "start": 0, 00:24:05.866 "length": 8192 00:24:05.866 }, 00:24:05.866 "queue_depth": 128, 00:24:05.866 "io_size": 4096, 00:24:05.866 "runtime": 10.025611, 00:24:05.866 "iops": 5371.044218651611, 00:24:05.866 "mibps": 20.980641479107856, 00:24:05.866 "io_failed": 0, 00:24:05.866 "io_timeout": 0, 00:24:05.866 "avg_latency_us": 23789.15516466102, 00:24:05.866 "min_latency_us": 4478.293333333333, 00:24:05.866 "max_latency_us": 97867.09333333334 00:24:05.866 } 00:24:05.866 ], 00:24:05.866 "core_count": 1 00:24:05.866 } 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2832119 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2832119 ']' 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2832119 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2832119 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2832119' 00:24:05.866 killing process with pid 2832119 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2832119 00:24:05.866 Received shutdown signal, test time was about 10.000000 seconds 00:24:05.866 00:24:05.866 Latency(us) 00:24:05.866 [2024-12-09T08:41:41.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.866 [2024-12-09T08:41:41.319Z] 
=================================================================================================================== 00:24:05.866 [2024-12-09T08:41:41.319Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2832119 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qse5bJRum1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qse5bJRum1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Qse5bJRum1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Qse5bJRum1 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2834212 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2834212 /var/tmp/bdevperf.sock 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2834212 ']' 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:05.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.866 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:05.866 [2024-12-09 09:41:41.313814] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:05.866 [2024-12-09 09:41:41.313872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834212 ] 00:24:06.127 [2024-12-09 09:41:41.388944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.127 [2024-12-09 09:41:41.404708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.127 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.127 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.127 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Qse5bJRum1 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:06.388 [2024-12-09 09:41:41.793321] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:06.388 [2024-12-09 09:41:41.799697] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:06.388 [2024-12-09 09:41:41.800534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270420 (107): Transport endpoint is not connected 00:24:06.388 [2024-12-09 09:41:41.801529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1270420 (9): Bad file descriptor 00:24:06.388 [2024-12-09 09:41:41.802531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:06.388 [2024-12-09 09:41:41.802540] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:06.388 [2024-12-09 09:41:41.802546] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:06.388 [2024-12-09 09:41:41.802555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:24:06.388 request: 00:24:06.388 { 00:24:06.388 "name": "TLSTEST", 00:24:06.388 "trtype": "tcp", 00:24:06.388 "traddr": "10.0.0.2", 00:24:06.388 "adrfam": "ipv4", 00:24:06.388 "trsvcid": "4420", 00:24:06.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.388 "prchk_reftag": false, 00:24:06.388 "prchk_guard": false, 00:24:06.388 "hdgst": false, 00:24:06.388 "ddgst": false, 00:24:06.388 "psk": "key0", 00:24:06.388 "allow_unrecognized_csi": false, 00:24:06.388 "method": "bdev_nvme_attach_controller", 00:24:06.388 "req_id": 1 00:24:06.388 } 00:24:06.388 Got JSON-RPC error response 00:24:06.388 response: 00:24:06.388 { 00:24:06.388 "code": -5, 00:24:06.388 "message": "Input/output error" 00:24:06.388 } 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2834212 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2834212 ']' 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2834212 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.388 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834212 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834212' 00:24:06.649 killing process with pid 2834212 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2834212 00:24:06.649 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.649 00:24:06.649 Latency(us) 00:24:06.649 [2024-12-09T08:41:42.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.649 [2024-12-09T08:41:42.102Z] =================================================================================================================== 00:24:06.649 [2024-12-09T08:41:42.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2834212 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.709AaRasRl 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.709AaRasRl 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.709AaRasRl 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.709AaRasRl 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2834465 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2834465 /var/tmp/bdevperf.sock 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2834465 ']' 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:06.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.649 09:41:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:06.649 [2024-12-09 09:41:42.028220] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:06.649 [2024-12-09 09:41:42.028278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834465 ] 00:24:06.649 [2024-12-09 09:41:42.084803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.649 [2024-12-09 09:41:42.100595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.910 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.910 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:06.910 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.709AaRasRl 00:24:06.910 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:07.171 [2024-12-09 09:41:42.485355] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.171 [2024-12-09 09:41:42.491268] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:07.171 [2024-12-09 09:41:42.491288] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:07.171 [2024-12-09 09:41:42.491308] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:07.171 [2024-12-09 09:41:42.491341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8420 (107): Transport endpoint is not connected 00:24:07.171 [2024-12-09 09:41:42.492328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a8420 (9): Bad file descriptor 00:24:07.171 [2024-12-09 09:41:42.493330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:07.171 [2024-12-09 09:41:42.493340] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:07.171 [2024-12-09 09:41:42.493346] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:07.171 [2024-12-09 09:41:42.493354] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
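This failure mode differs from the previous run: here the key file loads, but the target cannot find a PSK for the presented identity, because bdevperf connects as host2 while key0 was provisioned for host1. The identity string under lookup appears verbatim in the tcp_sock_get_key error above; a sketch of how it is assembled (the NVMe0R01 framing is taken from that trace, not invented):

```bash
# PSK identity the target searches for, per the error trace above: prefix,
# then hostnqn, then subnqn. A key registered under a different hostnqn
# can never match this string.
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
psk_identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$psk_identity"  # NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```

The failed RPC's request and error response follow in the trace.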
00:24:07.171 request: 00:24:07.171 { 00:24:07.171 "name": "TLSTEST", 00:24:07.171 "trtype": "tcp", 00:24:07.171 "traddr": "10.0.0.2", 00:24:07.171 "adrfam": "ipv4", 00:24:07.171 "trsvcid": "4420", 00:24:07.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.171 "prchk_reftag": false, 00:24:07.171 "prchk_guard": false, 00:24:07.171 "hdgst": false, 00:24:07.171 "ddgst": false, 00:24:07.171 "psk": "key0", 00:24:07.171 "allow_unrecognized_csi": false, 00:24:07.171 "method": "bdev_nvme_attach_controller", 00:24:07.171 "req_id": 1 00:24:07.171 } 00:24:07.171 Got JSON-RPC error response 00:24:07.171 response: 00:24:07.171 { 00:24:07.171 "code": -5, 00:24:07.171 "message": "Input/output error" 00:24:07.171 } 00:24:07.171 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2834465 00:24:07.171 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2834465 ']' 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2834465 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834465 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834465' 00:24:07.172 killing process with pid 2834465 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2834465 00:24:07.172 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.172 00:24:07.172 Latency(us) 00:24:07.172 [2024-12-09T08:41:42.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.172 [2024-12-09T08:41:42.625Z] =================================================================================================================== 00:24:07.172 [2024-12-09T08:41:42.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.172 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2834465 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.709AaRasRl 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.709AaRasRl 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:07.433 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.709AaRasRl 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.709AaRasRl 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2834482 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2834482 /var/tmp/bdevperf.sock 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2834482 ']' 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:07.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.434 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:07.434 [2024-12-09 09:41:42.745712] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:07.434 [2024-12-09 09:41:42.745767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834482 ] 00:24:07.434 [2024-12-09 09:41:42.804543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.434 [2024-12-09 09:41:42.818877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.695 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.695 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:07.695 09:41:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.709AaRasRl 00:24:07.695 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:07.956 [2024-12-09 09:41:43.235759] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:07.956 [2024-12-09 09:41:43.243105] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:07.956 [2024-12-09 09:41:43.243122] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:07.956 [2024-12-09 09:41:43.243141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:07.956 [2024-12-09 09:41:43.244075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a9420 (107): Transport endpoint is not connected 00:24:07.956 [2024-12-09 09:41:43.245071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a9420 (9): Bad file descriptor 00:24:07.956 [2024-12-09 09:41:43.246073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:07.956 [2024-12-09 09:41:43.246082] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:07.956 [2024-12-09 09:41:43.246088] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:07.956 [2024-12-09 09:41:43.246096] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:07.956 request: 00:24:07.956 { 00:24:07.956 "name": "TLSTEST", 00:24:07.956 "trtype": "tcp", 00:24:07.956 "traddr": "10.0.0.2", 00:24:07.956 "adrfam": "ipv4", 00:24:07.956 "trsvcid": "4420", 00:24:07.956 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.956 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.956 "prchk_reftag": false, 00:24:07.956 "prchk_guard": false, 00:24:07.956 "hdgst": false, 00:24:07.956 "ddgst": false, 00:24:07.956 "psk": "key0", 00:24:07.956 "allow_unrecognized_csi": false, 00:24:07.956 "method": "bdev_nvme_attach_controller", 00:24:07.956 "req_id": 1 00:24:07.956 } 00:24:07.956 Got JSON-RPC error response 00:24:07.956 response: 00:24:07.956 { 00:24:07.956 "code": -5, 00:24:07.956 "message": "Input/output error" 00:24:07.956 } 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2834482 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2834482 ']' 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2834482 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834482 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834482' 00:24:07.956 killing process with pid 2834482 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2834482 00:24:07.956 Received shutdown signal, test time was about 10.000000 seconds 00:24:07.956 00:24:07.956 Latency(us) 00:24:07.956 [2024-12-09T08:41:43.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.956 [2024-12-09T08:41:43.409Z] =================================================================================================================== 00:24:07.956 [2024-12-09T08:41:43.409Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:07.956 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2834482 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:08.217 
09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2834795 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2834795 /var/tmp/bdevperf.sock 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2834795 ']' 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.217 [2024-12-09 09:41:43.490715] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:08.217 [2024-12-09 09:41:43.490773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2834795 ] 00:24:08.217 [2024-12-09 09:41:43.547369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.217 [2024-12-09 09:41:43.563170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:08.217 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:08.477 [2024-12-09 09:41:43.779386] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:08.477 [2024-12-09 09:41:43.779407] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:08.477 request: 00:24:08.477 { 00:24:08.477 "name": "key0", 00:24:08.477 "path": "", 00:24:08.477 "method": "keyring_file_add_key", 00:24:08.477 "req_id": 1 00:24:08.477 } 00:24:08.477 Got JSON-RPC error response 00:24:08.477 response: 00:24:08.477 { 00:24:08.477 "code": -1, 00:24:08.477 "message": "Operation not permitted" 00:24:08.477 } 00:24:08.477 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:08.737 [2024-12-09 09:41:43.931850] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:08.737 [2024-12-09 09:41:43.931873] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:08.737 request: 00:24:08.737 { 00:24:08.737 "name": "TLSTEST", 00:24:08.737 "trtype": "tcp", 00:24:08.737 "traddr": "10.0.0.2", 00:24:08.737 "adrfam": "ipv4", 00:24:08.737 "trsvcid": "4420", 00:24:08.738 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.738 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.738 "prchk_reftag": false, 00:24:08.738 "prchk_guard": false, 00:24:08.738 "hdgst": false, 00:24:08.738 "ddgst": false, 00:24:08.738 "psk": "key0", 00:24:08.738 "allow_unrecognized_csi": false, 00:24:08.738 "method": "bdev_nvme_attach_controller", 00:24:08.738 "req_id": 1 00:24:08.738 } 00:24:08.738 Got JSON-RPC error response 00:24:08.738 response: 00:24:08.738 { 00:24:08.738 "code": -126, 00:24:08.738 "message": "Required key not available" 00:24:08.738 } 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2834795 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2834795 ']' 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2834795 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.738 09:41:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2834795 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834795' 00:24:08.738 killing process with pid 2834795 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2834795 00:24:08.738 Received shutdown signal, test time was about 10.000000 seconds 00:24:08.738 00:24:08.738 Latency(us) 00:24:08.738 [2024-12-09T08:41:44.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.738 [2024-12-09T08:41:44.191Z] =================================================================================================================== 00:24:08.738 [2024-12-09T08:41:44.191Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2834795 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2829362 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2829362 ']' 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2829362 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2829362 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2829362' 00:24:08.738 killing process with pid 2829362 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2829362 00:24:08.738 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2829362 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:08.998 09:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.D6ZEjKx6Va 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.D6ZEjKx6Va 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2834843 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2834843 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2834843 ']' 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.998 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:08.998 [2024-12-09 09:41:44.386621] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:08.998 [2024-12-09 09:41:44.386689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.258 [2024-12-09 09:41:44.476472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.258 [2024-12-09 09:41:44.492646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.258 [2024-12-09 09:41:44.492678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
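The tls.sh@160 through @163 steps above derive key_long by feeding the 48-character hex string through an inline Python helper, then write it to /tmp/tmp.D6ZEjKx6Va with mode 0600. A sketch of that derivation, assuming it mirrors the format_key function whose xtrace appears here (prefix NVMeTLSkey-1, digest 2, i.e. the SHA-384 variant of the TLS PSK interchange format): base64-encode the key bytes with their little-endian CRC32 appended, then add the interchange framing:

```bash
# Reconstruction of format_key from nvmf/common.sh, based on the xtrace above:
# emit "<prefix>:<digest as two hex digits>:base64(key + CRC32(key)):".
format_key() {
    local prefix=$1 key=$2 digest=$3
    python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # CRC32 appended little-endian
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF
}
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
```

The trailing CRC lets the keyring detect a corrupted key when it is loaded rather than failing later at the TLS handshake.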
00:24:09.258 [2024-12-09 09:41:44.492684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.258 [2024-12-09 09:41:44.492689] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.258 [2024-12-09 09:41:44.492694] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.258 [2024-12-09 09:41:44.493165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D6ZEjKx6Va 00:24:09.258 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:09.517 [2024-12-09 09:41:44.768773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.517 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:09.517 09:41:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:09.777 [2024-12-09 09:41:45.093563] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.777 [2024-12-09 09:41:45.093780] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.777 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:10.036 malloc0 00:24:10.036 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:10.036 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:10.295 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D6ZEjKx6Va 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D6ZEjKx6Va 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2835203 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2835203 /var/tmp/bdevperf.sock 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2835203 ']' 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.556 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.557 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.557 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:10.557 [2024-12-09 09:41:45.824046] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:10.557 [2024-12-09 09:41:45.824102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835203 ] 00:24:10.557 [2024-12-09 09:41:45.882582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.557 [2024-12-09 09:41:45.898652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.557 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.557 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:10.557 09:41:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:10.817 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:11.077 [2024-12-09 09:41:46.311537] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.077 TLSTESTn1 00:24:11.077 09:41:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:11.077 Running I/O for 10 seconds... 00:24:13.401 6245.00 IOPS, 24.39 MiB/s [2024-12-09T08:41:49.797Z] 6358.50 IOPS, 24.84 MiB/s [2024-12-09T08:41:50.737Z] 6414.33 IOPS, 25.06 MiB/s [2024-12-09T08:41:51.678Z] 6396.50 IOPS, 24.99 MiB/s [2024-12-09T08:41:52.621Z] 6138.60 IOPS, 23.98 MiB/s [2024-12-09T08:41:53.563Z] 5950.33 IOPS, 23.24 MiB/s [2024-12-09T08:41:54.948Z] 5822.86 IOPS, 22.75 MiB/s [2024-12-09T08:41:55.888Z] 5723.50 IOPS, 22.36 MiB/s [2024-12-09T08:41:56.831Z] 5645.67 IOPS, 22.05 MiB/s [2024-12-09T08:41:56.831Z] 5574.60 IOPS, 21.78 MiB/s 00:24:21.378 Latency(us) 00:24:21.378 [2024-12-09T08:41:56.831Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.378 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:21.378 Verification LBA range: start 0x0 length 0x2000 00:24:21.379 TLSTESTn1 : 10.03 5573.35 21.77 0.00 0.00 22924.58 6362.45 23920.64 00:24:21.379 [2024-12-09T08:41:56.832Z] =================================================================================================================== 00:24:21.379 [2024-12-09T08:41:56.832Z] Total : 5573.35 21.77 0.00 0.00 22924.58 6362.45 23920.64 00:24:21.379 { 00:24:21.379 "results": [ 00:24:21.379 { 00:24:21.379 "job": "TLSTESTn1", 00:24:21.379 "core_mask": "0x4", 00:24:21.379 "workload": "verify", 00:24:21.379 "status": "finished", 00:24:21.379 "verify_range": { 00:24:21.379 "start": 0, 00:24:21.379 "length": 8192 00:24:21.379 }, 00:24:21.379 "queue_depth": 128, 00:24:21.379 "io_size": 4096, 00:24:21.379 "runtime": 10.025204, 00:24:21.379 "iops": 5573.3529212971625, 00:24:21.379 "mibps": 21.77090984881704, 00:24:21.379 "io_failed": 0, 00:24:21.379 "io_timeout": 0, 00:24:21.379 "avg_latency_us": 22924.58248964933, 00:24:21.379 "min_latency_us": 6362.453333333333, 00:24:21.379 "max_latency_us": 23920.64 00:24:21.379 } 00:24:21.379 ], 00:24:21.379 "core_count": 1 
00:24:21.379 } 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2835203 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2835203 ']' 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2835203 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2835203 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2835203' 00:24:21.379 killing process with pid 2835203 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2835203 00:24:21.379 Received shutdown signal, test time was about 10.000000 seconds 00:24:21.379 00:24:21.379 Latency(us) 00:24:21.379 [2024-12-09T08:41:56.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:21.379 [2024-12-09T08:41:56.832Z] =================================================================================================================== 00:24:21.379 [2024-12-09T08:41:56.832Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2835203 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.D6ZEjKx6Va 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D6ZEjKx6Va 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D6ZEjKx6Va 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.D6ZEjKx6Va 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.379 09:41:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.D6ZEjKx6Va 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2837222 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2837222 /var/tmp/bdevperf.sock 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2837222 ']' 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.379 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.379 [2024-12-09 09:41:56.792512] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:21.379 [2024-12-09 09:41:56.792566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2837222 ] 00:24:21.640 [2024-12-09 09:41:56.851239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.640 [2024-12-09 09:41:56.865738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.640 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.640 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.640 09:41:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:21.902 [2024-12-09 09:41:57.098252] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.D6ZEjKx6Va': 0100666 00:24:21.903 [2024-12-09 09:41:57.098282] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:21.903 request: 00:24:21.903 { 00:24:21.903 "name": "key0", 00:24:21.903 "path": "/tmp/tmp.D6ZEjKx6Va", 00:24:21.903 "method": "keyring_file_add_key", 00:24:21.903 "req_id": 1 00:24:21.903 } 00:24:21.903 Got JSON-RPC error response 00:24:21.903 response: 00:24:21.903 { 00:24:21.903 "code": -1, 00:24:21.903 "message": "Operation not permitted" 00:24:21.903 } 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.903 [2024-12-09 09:41:57.278769] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:21.903 [2024-12-09 09:41:57.278792] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:21.903 request: 00:24:21.903 { 00:24:21.903 "name": "TLSTEST", 00:24:21.903 "trtype": "tcp", 00:24:21.903 "traddr": "10.0.0.2", 00:24:21.903 "adrfam": "ipv4", 00:24:21.903 "trsvcid": "4420", 00:24:21.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:21.903 "prchk_reftag": false, 00:24:21.903 "prchk_guard": false, 00:24:21.903 "hdgst": false, 00:24:21.903 "ddgst": false, 00:24:21.903 "psk": "key0", 00:24:21.903 "allow_unrecognized_csi": false, 00:24:21.903 "method": "bdev_nvme_attach_controller", 00:24:21.903 "req_id": 1 00:24:21.903 } 00:24:21.903 Got JSON-RPC error response 00:24:21.903 response: 00:24:21.903 { 00:24:21.903 "code": -126, 00:24:21.903 "message": "Required key not available" 00:24:21.903 } 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2837222 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2837222 ']' 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2837222 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.903 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837222 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837222' 00:24:22.164 killing process with pid 2837222 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2837222 00:24:22.164 Received shutdown signal, test time was about 10.000000 seconds 00:24:22.164 00:24:22.164 Latency(us) 00:24:22.164 [2024-12-09T08:41:57.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:22.164 [2024-12-09T08:41:57.617Z] =================================================================================================================== 00:24:22.164 [2024-12-09T08:41:57.617Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2837222 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2834843 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2834843 ']' 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2834843 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2834843 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2834843' 00:24:22.164 killing process with pid 2834843 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2834843 00:24:22.164 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2834843 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2837554 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2837554 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2837554 ']' 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.426 09:41:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.426 [2024-12-09 09:41:57.703077] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:22.426 [2024-12-09 09:41:57.703135] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.426 [2024-12-09 09:41:57.793254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.426 [2024-12-09 09:41:57.808296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.426 [2024-12-09 09:41:57.808330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.426 [2024-12-09 09:41:57.808337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.426 [2024-12-09 09:41:57.808342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.426 [2024-12-09 09:41:57.808347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
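The keyring_file_add_key failures above are the suite's negative-path check: SPDK's file-based keyring refuses any PSK file whose mode grants group or other access, so the 0666 key is rejected with -1 ("Operation not permitted") and the subsequent bdev_nvme_attach_controller fails with "Required key not available". A minimal sketch of that behavior, assuming a running target on the default RPC socket and an illustrative temp file in place of the suite's key path:

```bash
# Reproduce the permission check seen above (paths illustrative).
KEY=$(mktemp)                          # mktemp creates the file with mode 0600
chmod 0666 "$KEY"                      # widen perms to hit the failure path
scripts/rpc.py keyring_file_add_key key0 "$KEY" \
    || echo "rejected: key is group/other-readable"
chmod 0600 "$KEY"                      # owner-only, as target/tls.sh does below
scripts/rpc.py keyring_file_add_key key0 "$KEY"   # accepted
```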
00:24:22.426 [2024-12-09 09:41:57.808816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D6ZEjKx6Va 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:23.369 [2024-12-09 09:41:58.679845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.369 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:23.631 09:41:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:23.631 [2024-12-09 09:41:59.012664] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:23.631 [2024-12-09 09:41:59.012869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.631 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:23.891 malloc0 00:24:23.891 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.152 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:24.152 [2024-12-09 
09:41:59.491678] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.D6ZEjKx6Va': 0100666 00:24:24.152 [2024-12-09 09:41:59.491697] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:24.152 request: 00:24:24.152 { 00:24:24.152 "name": "key0", 00:24:24.152 "path": "/tmp/tmp.D6ZEjKx6Va", 00:24:24.152 "method": "keyring_file_add_key", 00:24:24.152 "req_id": 1 00:24:24.152 } 00:24:24.152 Got JSON-RPC error response 00:24:24.152 response: 00:24:24.152 { 00:24:24.152 "code": -1, 00:24:24.152 "message": "Operation not permitted" 00:24:24.152 } 00:24:24.152 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:24.412 [2024-12-09 09:41:59.648077] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:24.412 [2024-12-09 09:41:59.648102] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:24.412 request: 00:24:24.412 { 00:24:24.412 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.412 "host": "nqn.2016-06.io.spdk:host1", 00:24:24.412 "psk": "key0", 00:24:24.412 "method": "nvmf_subsystem_add_host", 00:24:24.412 "req_id": 1 00:24:24.412 } 00:24:24.412 Got JSON-RPC error response 00:24:24.412 response: 00:24:24.412 { 00:24:24.412 "code": -32603, 00:24:24.412 "message": "Internal error" 00:24:24.412 } 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2837554 ']' 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837554' 00:24:24.412 killing process with pid 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2837554 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.D6ZEjKx6Va 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:24.412 09:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2837928 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2837928 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2837928 ']' 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:24.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.412 09:41:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:24.673 [2024-12-09 09:41:59.897767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:24.673 [2024-12-09 09:41:59.897820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.673 [2024-12-09 09:41:59.988370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.673 [2024-12-09 09:42:00.003622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.673 [2024-12-09 09:42:00.003668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.673 [2024-12-09 09:42:00.003675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.673 [2024-12-09 09:42:00.003681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.673 [2024-12-09 09:42:00.003685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
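Once the key file is 0600, the setup_nvmf_tgt run that follows (target/tls.sh@52 through @59) succeeds end to end: transport, subsystem, a TLS-marked listener, a malloc namespace, the PSK in the keyring, and the host entry bound to that PSK. Condensed into one standalone sketch, with the addresses, NQNs, and key path copied from the trace and the RPC socket assumed to be the default /var/tmp/spdk.sock:

```bash
RPC="scripts/rpc.py"                                  # default socket: /var/tmp/spdk.sock
$RPC nvmf_create_transport -t tcp -o                  # TCP transport, default options
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420 -k                    # -k marks the listener as TLS
$RPC bdev_malloc_create 32 4096 -b malloc0            # 32 MiB namespace backing store
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va    # succeeds now that the key is 0600
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```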
00:24:24.673 [2024-12-09 09:42:00.004055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:25.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:25.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:25.244 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:25.245 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:25.505 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.505 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:25.505 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D6ZEjKx6Va 00:24:25.505 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:25.505 [2024-12-09 09:42:00.880420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.505 09:42:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:25.766 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:25.766 [2024-12-09 09:42:01.217231] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:25.766 [2024-12-09 09:42:01.217444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.026 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:26.026 malloc0 00:24:26.026 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:26.286 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:26.286 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2838403 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2838403 /var/tmp/bdevperf.sock 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2838403 ']' 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:26.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.546 09:42:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:26.546 [2024-12-09 09:42:01.954382] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:26.546 [2024-12-09 09:42:01.954440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838403 ] 00:24:26.806 [2024-12-09 09:42:02.011499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.807 [2024-12-09 09:42:02.027643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.807 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.807 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:26.807 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:27.067 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.067 [2024-12-09 09:42:02.432416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.067 TLSTESTn1 00:24:27.328 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:27.590 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:27.590 "subsystems": [ 00:24:27.590 { 00:24:27.590 "subsystem": "keyring", 00:24:27.590 "config": [ 00:24:27.590 { 00:24:27.590 "method": "keyring_file_add_key", 00:24:27.590 "params": { 00:24:27.590 "name": "key0", 00:24:27.590 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:27.590 } 00:24:27.590 } 00:24:27.590 ] 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "subsystem": "iobuf", 00:24:27.590 "config": [ 00:24:27.590 { 00:24:27.590 "method": "iobuf_set_options", 00:24:27.590 "params": { 00:24:27.590 "small_pool_count": 8192, 00:24:27.590 "large_pool_count": 1024, 00:24:27.590 "small_bufsize": 8192, 00:24:27.590 "large_bufsize": 135168, 00:24:27.590 "enable_numa": false 00:24:27.590 } 00:24:27.590 } 00:24:27.590 ] 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "subsystem": "sock", 00:24:27.590 "config": [ 00:24:27.590 { 00:24:27.590 "method": "sock_set_default_impl", 00:24:27.590 "params": { 00:24:27.590 "impl_name": "posix" 
00:24:27.590 } 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "method": "sock_impl_set_options", 00:24:27.590 "params": { 00:24:27.590 "impl_name": "ssl", 00:24:27.590 "recv_buf_size": 4096, 00:24:27.590 "send_buf_size": 4096, 00:24:27.590 "enable_recv_pipe": true, 00:24:27.590 "enable_quickack": false, 00:24:27.590 "enable_placement_id": 0, 00:24:27.590 "enable_zerocopy_send_server": true, 00:24:27.590 "enable_zerocopy_send_client": false, 00:24:27.590 "zerocopy_threshold": 0, 00:24:27.590 "tls_version": 0, 00:24:27.590 "enable_ktls": false 00:24:27.590 } 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "method": "sock_impl_set_options", 00:24:27.590 "params": { 00:24:27.590 "impl_name": "posix", 00:24:27.590 "recv_buf_size": 2097152, 00:24:27.590 "send_buf_size": 2097152, 00:24:27.590 "enable_recv_pipe": true, 00:24:27.590 "enable_quickack": false, 00:24:27.590 "enable_placement_id": 0, 00:24:27.590 "enable_zerocopy_send_server": true, 00:24:27.590 "enable_zerocopy_send_client": false, 00:24:27.590 "zerocopy_threshold": 0, 00:24:27.590 "tls_version": 0, 00:24:27.590 "enable_ktls": false 00:24:27.590 } 00:24:27.590 } 00:24:27.590 ] 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "subsystem": "vmd", 00:24:27.590 "config": [] 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "subsystem": "accel", 00:24:27.590 "config": [ 00:24:27.590 { 00:24:27.590 "method": "accel_set_options", 00:24:27.590 "params": { 00:24:27.590 "small_cache_size": 128, 00:24:27.590 "large_cache_size": 16, 00:24:27.590 "task_count": 2048, 00:24:27.590 "sequence_count": 2048, 00:24:27.590 "buf_count": 2048 00:24:27.590 } 00:24:27.590 } 00:24:27.590 ] 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "subsystem": "bdev", 00:24:27.590 "config": [ 00:24:27.590 { 00:24:27.590 "method": "bdev_set_options", 00:24:27.590 "params": { 00:24:27.590 "bdev_io_pool_size": 65535, 00:24:27.590 "bdev_io_cache_size": 256, 00:24:27.590 "bdev_auto_examine": true, 00:24:27.590 "iobuf_small_cache_size": 128, 00:24:27.590 "iobuf_large_cache_size": 16 00:24:27.590 } 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "method": "bdev_raid_set_options", 00:24:27.590 "params": { 00:24:27.590 "process_window_size_kb": 1024, 00:24:27.590 "process_max_bandwidth_mb_sec": 0 00:24:27.590 } 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "method": "bdev_iscsi_set_options", 00:24:27.590 "params": { 00:24:27.590 "timeout_sec": 30 00:24:27.590 } 00:24:27.590 }, 00:24:27.590 { 00:24:27.590 "method": "bdev_nvme_set_options", 00:24:27.590 "params": { 00:24:27.590 "action_on_timeout": "none", 00:24:27.590 "timeout_us": 0, 00:24:27.590 "timeout_admin_us": 0, 00:24:27.590 "keep_alive_timeout_ms": 10000, 00:24:27.590 "arbitration_burst": 0, 00:24:27.590 "low_priority_weight": 0, 00:24:27.590 "medium_priority_weight": 0, 00:24:27.590 "high_priority_weight": 0, 00:24:27.590 "nvme_adminq_poll_period_us": 10000, 00:24:27.590 "nvme_ioq_poll_period_us": 0, 00:24:27.590 "io_queue_requests": 0, 00:24:27.590 "delay_cmd_submit": true, 00:24:27.590 "transport_retry_count": 4, 00:24:27.590 "bdev_retry_count": 3, 00:24:27.590 "transport_ack_timeout": 0, 00:24:27.590 "ctrlr_loss_timeout_sec": 0, 00:24:27.591 "reconnect_delay_sec": 0, 00:24:27.591 "fast_io_fail_timeout_sec": 0, 00:24:27.591 "disable_auto_failback": false, 00:24:27.591 "generate_uuids": false, 00:24:27.591 "transport_tos": 0, 00:24:27.591 "nvme_error_stat": false, 00:24:27.591 "rdma_srq_size": 0, 00:24:27.591 "io_path_stat": false, 00:24:27.591 "allow_accel_sequence": false, 00:24:27.591 "rdma_max_cq_size": 0, 00:24:27.591 
"rdma_cm_event_timeout_ms": 0, 00:24:27.591 "dhchap_digests": [ 00:24:27.591 "sha256", 00:24:27.591 "sha384", 00:24:27.591 "sha512" 00:24:27.591 ], 00:24:27.591 "dhchap_dhgroups": [ 00:24:27.591 "null", 00:24:27.591 "ffdhe2048", 00:24:27.591 "ffdhe3072", 00:24:27.591 "ffdhe4096", 00:24:27.591 "ffdhe6144", 00:24:27.591 "ffdhe8192" 00:24:27.591 ] 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "bdev_nvme_set_hotplug", 00:24:27.591 "params": { 00:24:27.591 "period_us": 100000, 00:24:27.591 "enable": false 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "bdev_malloc_create", 00:24:27.591 "params": { 00:24:27.591 "name": "malloc0", 00:24:27.591 "num_blocks": 8192, 00:24:27.591 "block_size": 4096, 00:24:27.591 "physical_block_size": 4096, 00:24:27.591 "uuid": "c3ff3de7-45d5-4f7e-a997-9111dd4d5939", 00:24:27.591 "optimal_io_boundary": 0, 00:24:27.591 "md_size": 0, 00:24:27.591 "dif_type": 0, 00:24:27.591 "dif_is_head_of_md": false, 00:24:27.591 "dif_pi_format": 0 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "bdev_wait_for_examine" 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "subsystem": "nbd", 00:24:27.591 "config": [] 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "subsystem": "scheduler", 00:24:27.591 "config": [ 00:24:27.591 { 00:24:27.591 "method": "framework_set_scheduler", 00:24:27.591 "params": { 00:24:27.591 "name": "static" 00:24:27.591 } 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "subsystem": "nvmf", 00:24:27.591 "config": [ 00:24:27.591 { 00:24:27.591 "method": "nvmf_set_config", 00:24:27.591 "params": { 00:24:27.591 "discovery_filter": "match_any", 00:24:27.591 "admin_cmd_passthru": { 00:24:27.591 "identify_ctrlr": false 00:24:27.591 }, 00:24:27.591 "dhchap_digests": [ 00:24:27.591 "sha256", 00:24:27.591 "sha384", 00:24:27.591 "sha512" 00:24:27.591 ], 00:24:27.591 "dhchap_dhgroups": [ 00:24:27.591 "null", 00:24:27.591 "ffdhe2048", 00:24:27.591 "ffdhe3072", 00:24:27.591 "ffdhe4096", 00:24:27.591 "ffdhe6144", 00:24:27.591 "ffdhe8192" 00:24:27.591 ] 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_set_max_subsystems", 00:24:27.591 "params": { 00:24:27.591 "max_subsystems": 1024 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_set_crdt", 00:24:27.591 "params": { 00:24:27.591 "crdt1": 0, 00:24:27.591 "crdt2": 0, 00:24:27.591 "crdt3": 0 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_create_transport", 00:24:27.591 "params": { 00:24:27.591 "trtype": "TCP", 00:24:27.591 "max_queue_depth": 128, 00:24:27.591 "max_io_qpairs_per_ctrlr": 127, 00:24:27.591 "in_capsule_data_size": 4096, 00:24:27.591 "max_io_size": 131072, 00:24:27.591 "io_unit_size": 131072, 00:24:27.591 "max_aq_depth": 128, 00:24:27.591 "num_shared_buffers": 511, 00:24:27.591 "buf_cache_size": 4294967295, 00:24:27.591 "dif_insert_or_strip": false, 00:24:27.591 "zcopy": false, 00:24:27.591 "c2h_success": false, 00:24:27.591 "sock_priority": 0, 00:24:27.591 "abort_timeout_sec": 1, 00:24:27.591 "ack_timeout": 0, 00:24:27.591 "data_wr_pool_size": 0 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_create_subsystem", 00:24:27.591 "params": { 00:24:27.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.591 "allow_any_host": false, 00:24:27.591 "serial_number": "SPDK00000000000001", 00:24:27.591 "model_number": "SPDK bdev Controller", 00:24:27.591 "max_namespaces": 10, 00:24:27.591 "min_cntlid": 1, 00:24:27.591 
"max_cntlid": 65519, 00:24:27.591 "ana_reporting": false 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_subsystem_add_host", 00:24:27.591 "params": { 00:24:27.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.591 "host": "nqn.2016-06.io.spdk:host1", 00:24:27.591 "psk": "key0" 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_subsystem_add_ns", 00:24:27.591 "params": { 00:24:27.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.591 "namespace": { 00:24:27.591 "nsid": 1, 00:24:27.591 "bdev_name": "malloc0", 00:24:27.591 "nguid": "C3FF3DE745D54F7EA9979111DD4D5939", 00:24:27.591 "uuid": "c3ff3de7-45d5-4f7e-a997-9111dd4d5939", 00:24:27.591 "no_auto_visible": false 00:24:27.591 } 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "nvmf_subsystem_add_listener", 00:24:27.591 "params": { 00:24:27.591 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.591 "listen_address": { 00:24:27.591 "trtype": "TCP", 00:24:27.591 "adrfam": "IPv4", 00:24:27.591 "traddr": "10.0.0.2", 00:24:27.591 "trsvcid": "4420" 00:24:27.591 }, 00:24:27.591 "secure_channel": true 00:24:27.591 } 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 }' 00:24:27.591 09:42:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:27.591 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:27.591 "subsystems": [ 00:24:27.591 { 00:24:27.591 "subsystem": "keyring", 00:24:27.591 "config": [ 00:24:27.591 { 00:24:27.591 "method": "keyring_file_add_key", 00:24:27.591 "params": { 00:24:27.591 "name": "key0", 00:24:27.591 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:27.591 } 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "subsystem": "iobuf", 00:24:27.591 "config": [ 00:24:27.591 { 00:24:27.591 "method": "iobuf_set_options", 00:24:27.591 "params": { 00:24:27.591 "small_pool_count": 8192, 00:24:27.591 "large_pool_count": 1024, 00:24:27.591 "small_bufsize": 8192, 00:24:27.591 "large_bufsize": 135168, 00:24:27.591 "enable_numa": false 00:24:27.591 } 00:24:27.591 } 00:24:27.591 ] 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "subsystem": "sock", 00:24:27.591 "config": [ 00:24:27.591 { 00:24:27.591 "method": "sock_set_default_impl", 00:24:27.591 "params": { 00:24:27.591 "impl_name": "posix" 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "sock_impl_set_options", 00:24:27.591 "params": { 00:24:27.591 "impl_name": "ssl", 00:24:27.591 "recv_buf_size": 4096, 00:24:27.591 "send_buf_size": 4096, 00:24:27.591 "enable_recv_pipe": true, 00:24:27.591 "enable_quickack": false, 00:24:27.591 "enable_placement_id": 0, 00:24:27.591 "enable_zerocopy_send_server": true, 00:24:27.591 "enable_zerocopy_send_client": false, 00:24:27.591 "zerocopy_threshold": 0, 00:24:27.591 "tls_version": 0, 00:24:27.591 "enable_ktls": false 00:24:27.591 } 00:24:27.591 }, 00:24:27.591 { 00:24:27.591 "method": "sock_impl_set_options", 00:24:27.591 "params": { 00:24:27.591 "impl_name": "posix", 00:24:27.591 "recv_buf_size": 2097152, 00:24:27.591 "send_buf_size": 2097152, 00:24:27.592 "enable_recv_pipe": true, 00:24:27.592 "enable_quickack": false, 00:24:27.592 "enable_placement_id": 0, 00:24:27.592 "enable_zerocopy_send_server": true, 00:24:27.592 "enable_zerocopy_send_client": false, 00:24:27.592 "zerocopy_threshold": 0, 00:24:27.592 "tls_version": 0, 00:24:27.592 "enable_ktls": false 00:24:27.592 } 00:24:27.592 
} 00:24:27.592 ] 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "subsystem": "vmd", 00:24:27.592 "config": [] 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "subsystem": "accel", 00:24:27.592 "config": [ 00:24:27.592 { 00:24:27.592 "method": "accel_set_options", 00:24:27.592 "params": { 00:24:27.592 "small_cache_size": 128, 00:24:27.592 "large_cache_size": 16, 00:24:27.592 "task_count": 2048, 00:24:27.592 "sequence_count": 2048, 00:24:27.592 "buf_count": 2048 00:24:27.592 } 00:24:27.592 } 00:24:27.592 ] 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "subsystem": "bdev", 00:24:27.592 "config": [ 00:24:27.592 { 00:24:27.592 "method": "bdev_set_options", 00:24:27.592 "params": { 00:24:27.592 "bdev_io_pool_size": 65535, 00:24:27.592 "bdev_io_cache_size": 256, 00:24:27.592 "bdev_auto_examine": true, 00:24:27.592 "iobuf_small_cache_size": 128, 00:24:27.592 "iobuf_large_cache_size": 16 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": "bdev_raid_set_options", 00:24:27.592 "params": { 00:24:27.592 "process_window_size_kb": 1024, 00:24:27.592 "process_max_bandwidth_mb_sec": 0 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": "bdev_iscsi_set_options", 00:24:27.592 "params": { 00:24:27.592 "timeout_sec": 30 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": "bdev_nvme_set_options", 00:24:27.592 "params": { 00:24:27.592 "action_on_timeout": "none", 00:24:27.592 "timeout_us": 0, 00:24:27.592 "timeout_admin_us": 0, 00:24:27.592 "keep_alive_timeout_ms": 10000, 00:24:27.592 "arbitration_burst": 0, 00:24:27.592 "low_priority_weight": 0, 00:24:27.592 "medium_priority_weight": 0, 00:24:27.592 "high_priority_weight": 0, 00:24:27.592 "nvme_adminq_poll_period_us": 10000, 00:24:27.592 "nvme_ioq_poll_period_us": 0, 00:24:27.592 "io_queue_requests": 512, 00:24:27.592 "delay_cmd_submit": true, 00:24:27.592 "transport_retry_count": 4, 00:24:27.592 "bdev_retry_count": 3, 00:24:27.592 "transport_ack_timeout": 0, 00:24:27.592 "ctrlr_loss_timeout_sec": 0, 00:24:27.592 "reconnect_delay_sec": 0, 00:24:27.592 "fast_io_fail_timeout_sec": 0, 00:24:27.592 "disable_auto_failback": false, 00:24:27.592 "generate_uuids": false, 00:24:27.592 "transport_tos": 0, 00:24:27.592 "nvme_error_stat": false, 00:24:27.592 "rdma_srq_size": 0, 00:24:27.592 "io_path_stat": false, 00:24:27.592 "allow_accel_sequence": false, 00:24:27.592 "rdma_max_cq_size": 0, 00:24:27.592 "rdma_cm_event_timeout_ms": 0, 00:24:27.592 "dhchap_digests": [ 00:24:27.592 "sha256", 00:24:27.592 "sha384", 00:24:27.592 "sha512" 00:24:27.592 ], 00:24:27.592 "dhchap_dhgroups": [ 00:24:27.592 "null", 00:24:27.592 "ffdhe2048", 00:24:27.592 "ffdhe3072", 00:24:27.592 "ffdhe4096", 00:24:27.592 "ffdhe6144", 00:24:27.592 "ffdhe8192" 00:24:27.592 ] 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": "bdev_nvme_attach_controller", 00:24:27.592 "params": { 00:24:27.592 "name": "TLSTEST", 00:24:27.592 "trtype": "TCP", 00:24:27.592 "adrfam": "IPv4", 00:24:27.592 "traddr": "10.0.0.2", 00:24:27.592 "trsvcid": "4420", 00:24:27.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.592 "prchk_reftag": false, 00:24:27.592 "prchk_guard": false, 00:24:27.592 "ctrlr_loss_timeout_sec": 0, 00:24:27.592 "reconnect_delay_sec": 0, 00:24:27.592 "fast_io_fail_timeout_sec": 0, 00:24:27.592 "psk": "key0", 00:24:27.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:27.592 "hdgst": false, 00:24:27.592 "ddgst": false, 00:24:27.592 "multipath": "multipath" 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": 
"bdev_nvme_set_hotplug", 00:24:27.592 "params": { 00:24:27.592 "period_us": 100000, 00:24:27.592 "enable": false 00:24:27.592 } 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "method": "bdev_wait_for_examine" 00:24:27.592 } 00:24:27.592 ] 00:24:27.592 }, 00:24:27.592 { 00:24:27.592 "subsystem": "nbd", 00:24:27.592 "config": [] 00:24:27.592 } 00:24:27.592 ] 00:24:27.592 }' 00:24:27.592 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2838403 00:24:27.592 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2838403 ']' 00:24:27.592 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2838403 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838403 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838403' 00:24:27.853 killing process with pid 2838403 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2838403 00:24:27.853 Received shutdown signal, test time was about 10.000000 seconds 00:24:27.853 00:24:27.853 Latency(us) 00:24:27.853 [2024-12-09T08:42:03.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.853 [2024-12-09T08:42:03.306Z] =================================================================================================================== 00:24:27.853 [2024-12-09T08:42:03.306Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2838403 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2837928 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2837928 ']' 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2837928 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2837928 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2837928' 00:24:27.853 killing process with pid 2837928 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2837928 00:24:27.853 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2837928 00:24:28.115 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:28.115 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:28.115 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:28.115 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.115 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:28.115 "subsystems": [ 00:24:28.115 { 00:24:28.115 "subsystem": "keyring", 00:24:28.115 "config": [ 00:24:28.115 { 00:24:28.115 "method": "keyring_file_add_key", 00:24:28.115 "params": { 00:24:28.115 "name": "key0", 00:24:28.115 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:28.115 } 00:24:28.115 } 00:24:28.115 ] 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "subsystem": "iobuf", 00:24:28.115 "config": [ 00:24:28.115 { 00:24:28.115 "method": "iobuf_set_options", 00:24:28.115 "params": { 00:24:28.115 "small_pool_count": 8192, 00:24:28.115 "large_pool_count": 1024, 00:24:28.115 "small_bufsize": 8192, 00:24:28.115 "large_bufsize": 135168, 00:24:28.115 "enable_numa": false 00:24:28.115 } 00:24:28.115 } 00:24:28.115 ] 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "subsystem": "sock", 00:24:28.115 "config": [ 00:24:28.115 { 00:24:28.115 "method": "sock_set_default_impl", 00:24:28.115 "params": { 00:24:28.115 "impl_name": "posix" 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "sock_impl_set_options", 00:24:28.115 "params": { 00:24:28.115 "impl_name": "ssl", 00:24:28.115 "recv_buf_size": 4096, 00:24:28.115 "send_buf_size": 4096, 00:24:28.115 "enable_recv_pipe": true, 00:24:28.115 "enable_quickack": false, 00:24:28.115 "enable_placement_id": 0, 00:24:28.115 "enable_zerocopy_send_server": true, 00:24:28.115 "enable_zerocopy_send_client": false, 00:24:28.115 "zerocopy_threshold": 0, 00:24:28.115 "tls_version": 0, 00:24:28.115 "enable_ktls": false 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "sock_impl_set_options", 00:24:28.115 "params": { 00:24:28.115 "impl_name": "posix", 00:24:28.115 "recv_buf_size": 2097152, 00:24:28.115 "send_buf_size": 2097152, 00:24:28.115 "enable_recv_pipe": true, 00:24:28.115 "enable_quickack": false, 00:24:28.115 "enable_placement_id": 0, 00:24:28.115 "enable_zerocopy_send_server": true, 00:24:28.115 "enable_zerocopy_send_client": false, 00:24:28.115 "zerocopy_threshold": 0, 00:24:28.115 "tls_version": 0, 00:24:28.115 "enable_ktls": false 00:24:28.115 } 00:24:28.115 } 00:24:28.115 ] 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "subsystem": "vmd", 00:24:28.115 "config": [] 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "subsystem": "accel", 00:24:28.115 "config": [ 00:24:28.115 { 00:24:28.115 "method": "accel_set_options", 00:24:28.115 "params": { 00:24:28.115 "small_cache_size": 128, 00:24:28.115 "large_cache_size": 16, 00:24:28.115 "task_count": 2048, 00:24:28.115 "sequence_count": 2048, 00:24:28.115 "buf_count": 2048 00:24:28.115 } 00:24:28.115 } 00:24:28.115 ] 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "subsystem": "bdev", 00:24:28.115 "config": [ 00:24:28.115 { 00:24:28.115 "method": "bdev_set_options", 00:24:28.115 "params": { 00:24:28.115 "bdev_io_pool_size": 65535, 00:24:28.115 "bdev_io_cache_size": 256, 00:24:28.115 "bdev_auto_examine": true, 00:24:28.115 "iobuf_small_cache_size": 128, 00:24:28.115 "iobuf_large_cache_size": 16 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "bdev_raid_set_options", 00:24:28.115 "params": { 00:24:28.115 
"process_window_size_kb": 1024, 00:24:28.115 "process_max_bandwidth_mb_sec": 0 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "bdev_iscsi_set_options", 00:24:28.115 "params": { 00:24:28.115 "timeout_sec": 30 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "bdev_nvme_set_options", 00:24:28.115 "params": { 00:24:28.115 "action_on_timeout": "none", 00:24:28.115 "timeout_us": 0, 00:24:28.115 "timeout_admin_us": 0, 00:24:28.115 "keep_alive_timeout_ms": 10000, 00:24:28.115 "arbitration_burst": 0, 00:24:28.115 "low_priority_weight": 0, 00:24:28.115 "medium_priority_weight": 0, 00:24:28.115 "high_priority_weight": 0, 00:24:28.115 "nvme_adminq_poll_period_us": 10000, 00:24:28.115 "nvme_ioq_poll_period_us": 0, 00:24:28.115 "io_queue_requests": 0, 00:24:28.115 "delay_cmd_submit": true, 00:24:28.115 "transport_retry_count": 4, 00:24:28.115 "bdev_retry_count": 3, 00:24:28.115 "transport_ack_timeout": 0, 00:24:28.115 "ctrlr_loss_timeout_sec": 0, 00:24:28.115 "reconnect_delay_sec": 0, 00:24:28.115 "fast_io_fail_timeout_sec": 0, 00:24:28.115 "disable_auto_failback": false, 00:24:28.115 "generate_uuids": false, 00:24:28.115 "transport_tos": 0, 00:24:28.115 "nvme_error_stat": false, 00:24:28.115 "rdma_srq_size": 0, 00:24:28.115 "io_path_stat": false, 00:24:28.115 "allow_accel_sequence": false, 00:24:28.115 "rdma_max_cq_size": 0, 00:24:28.115 "rdma_cm_event_timeout_ms": 0, 00:24:28.115 "dhchap_digests": [ 00:24:28.115 "sha256", 00:24:28.115 "sha384", 00:24:28.115 "sha512" 00:24:28.115 ], 00:24:28.115 "dhchap_dhgroups": [ 00:24:28.115 "null", 00:24:28.115 "ffdhe2048", 00:24:28.115 "ffdhe3072", 00:24:28.115 "ffdhe4096", 00:24:28.115 "ffdhe6144", 00:24:28.115 "ffdhe8192" 00:24:28.115 ] 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "bdev_nvme_set_hotplug", 00:24:28.115 "params": { 00:24:28.115 "period_us": 100000, 00:24:28.115 "enable": false 00:24:28.115 } 00:24:28.115 }, 00:24:28.115 { 00:24:28.115 "method": "bdev_malloc_create", 00:24:28.115 "params": { 00:24:28.115 "name": "malloc0", 00:24:28.115 "num_blocks": 8192, 00:24:28.115 "block_size": 4096, 00:24:28.115 "physical_block_size": 4096, 00:24:28.116 "uuid": "c3ff3de7-45d5-4f7e-a997-9111dd4d5939", 00:24:28.116 "optimal_io_boundary": 0, 00:24:28.116 "md_size": 0, 00:24:28.116 "dif_type": 0, 00:24:28.116 "dif_is_head_of_md": false, 00:24:28.116 "dif_pi_format": 0 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "bdev_wait_for_examine" 00:24:28.116 } 00:24:28.116 ] 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "subsystem": "nbd", 00:24:28.116 "config": [] 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "subsystem": "scheduler", 00:24:28.116 "config": [ 00:24:28.116 { 00:24:28.116 "method": "framework_set_scheduler", 00:24:28.116 "params": { 00:24:28.116 "name": "static" 00:24:28.116 } 00:24:28.116 } 00:24:28.116 ] 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "subsystem": "nvmf", 00:24:28.116 "config": [ 00:24:28.116 { 00:24:28.116 "method": "nvmf_set_config", 00:24:28.116 "params": { 00:24:28.116 "discovery_filter": "match_any", 00:24:28.116 "admin_cmd_passthru": { 00:24:28.116 "identify_ctrlr": false 00:24:28.116 }, 00:24:28.116 "dhchap_digests": [ 00:24:28.116 "sha256", 00:24:28.116 "sha384", 00:24:28.116 "sha512" 00:24:28.116 ], 00:24:28.116 "dhchap_dhgroups": [ 00:24:28.116 "null", 00:24:28.116 "ffdhe2048", 00:24:28.116 "ffdhe3072", 00:24:28.116 "ffdhe4096", 00:24:28.116 "ffdhe6144", 00:24:28.116 "ffdhe8192" 00:24:28.116 ] 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 
00:24:28.116 "method": "nvmf_set_max_subsystems", 00:24:28.116 "params": { 00:24:28.116 "max_subsystems": 1024 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_set_crdt", 00:24:28.116 "params": { 00:24:28.116 "crdt1": 0, 00:24:28.116 "crdt2": 0, 00:24:28.116 "crdt3": 0 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_create_transport", 00:24:28.116 "params": { 00:24:28.116 "trtype": "TCP", 00:24:28.116 "max_queue_depth": 128, 00:24:28.116 "max_io_qpairs_per_ctrlr": 127, 00:24:28.116 "in_capsule_data_size": 4096, 00:24:28.116 "max_io_size": 131072, 00:24:28.116 "io_unit_size": 131072, 00:24:28.116 "max_aq_depth": 128, 00:24:28.116 "num_shared_buffers": 511, 00:24:28.116 "buf_cache_size": 4294967295, 00:24:28.116 "dif_insert_or_strip": false, 00:24:28.116 "zcopy": false, 00:24:28.116 "c2h_success": false, 00:24:28.116 "sock_priority": 0, 00:24:28.116 "abort_timeout_sec": 1, 00:24:28.116 "ack_timeout": 0, 00:24:28.116 "data_wr_pool_size": 0 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_create_subsystem", 00:24:28.116 "params": { 00:24:28.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.116 "allow_any_host": false, 00:24:28.116 "serial_number": "SPDK00000000000001", 00:24:28.116 "model_number": "SPDK bdev Controller", 00:24:28.116 "max_namespaces": 10, 00:24:28.116 "min_cntlid": 1, 00:24:28.116 "max_cntlid": 65519, 00:24:28.116 "ana_reporting": false 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_subsystem_add_host", 00:24:28.116 "params": { 00:24:28.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.116 "host": "nqn.2016-06.io.spdk:host1", 00:24:28.116 "psk": "key0" 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_subsystem_add_ns", 00:24:28.116 "params": { 00:24:28.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.116 "namespace": { 00:24:28.116 "nsid": 1, 00:24:28.116 "bdev_name": "malloc0", 00:24:28.116 "nguid": "C3FF3DE745D54F7EA9979111DD4D5939", 00:24:28.116 "uuid": "c3ff3de7-45d5-4f7e-a997-9111dd4d5939", 00:24:28.116 "no_auto_visible": false 00:24:28.116 } 00:24:28.116 } 00:24:28.116 }, 00:24:28.116 { 00:24:28.116 "method": "nvmf_subsystem_add_listener", 00:24:28.116 "params": { 00:24:28.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.116 "listen_address": { 00:24:28.116 "trtype": "TCP", 00:24:28.116 "adrfam": "IPv4", 00:24:28.116 "traddr": "10.0.0.2", 00:24:28.116 "trsvcid": "4420" 00:24:28.116 }, 00:24:28.116 "secure_channel": true 00:24:28.116 } 00:24:28.116 } 00:24:28.116 ] 00:24:28.116 } 00:24:28.116 ] 00:24:28.116 }' 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2838751 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2838751 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2838751 ']' 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:28.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.116 09:42:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.116 [2024-12-09 09:42:03.429412] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:28.116 [2024-12-09 09:42:03.429464] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.116 [2024-12-09 09:42:03.520886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.116 [2024-12-09 09:42:03.534865] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.116 [2024-12-09 09:42:03.534896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.116 [2024-12-09 09:42:03.534902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.116 [2024-12-09 09:42:03.534907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.116 [2024-12-09 09:42:03.534912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.116 [2024-12-09 09:42:03.535435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.377 [2024-12-09 09:42:03.723734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.377 [2024-12-09 09:42:03.755751] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:28.377 [2024-12-09 09:42:03.755958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2839060 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2839060 /var/tmp/bdevperf.sock 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2839060 ']' 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
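Both daemons in this block take their configuration anonymously: the suite expands the JSON through bash process substitution, which is why nvmf_tgt reads -c /dev/fd/62 and bdevperf reads -c /dev/fd/63 rather than a file on disk. A sketch of the bdevperf side, assuming the config captured by the earlier save_config call is held in a shell variable:

```bash
# $bdevperfconf is the JSON captured via `rpc.py save_config`; process
# substitution presents it to bdevperf as the /dev/fd/63 path seen below.
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 \
    -c <(echo "$bdevperfconf")
```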
00:24:28.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:28.950 09:42:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:28.950 "subsystems": [ 00:24:28.950 { 00:24:28.950 "subsystem": "keyring", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "keyring_file_add_key", 00:24:28.950 "params": { 00:24:28.950 "name": "key0", 00:24:28.950 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": "iobuf", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "iobuf_set_options", 00:24:28.950 "params": { 00:24:28.950 "small_pool_count": 8192, 00:24:28.950 "large_pool_count": 1024, 00:24:28.950 "small_bufsize": 8192, 00:24:28.950 "large_bufsize": 135168, 00:24:28.950 "enable_numa": false 00:24:28.950 } 00:24:28.950 } 00:24:28.950 ] 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "subsystem": "sock", 00:24:28.950 "config": [ 00:24:28.950 { 00:24:28.950 "method": "sock_set_default_impl", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "posix" 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "sock_impl_set_options", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "ssl", 00:24:28.950 "recv_buf_size": 4096, 00:24:28.950 "send_buf_size": 4096, 00:24:28.950 "enable_recv_pipe": true, 00:24:28.950 "enable_quickack": false, 00:24:28.950 "enable_placement_id": 0, 00:24:28.950 "enable_zerocopy_send_server": true, 00:24:28.950 "enable_zerocopy_send_client": false, 00:24:28.950 "zerocopy_threshold": 0, 00:24:28.950 "tls_version": 0, 00:24:28.950 "enable_ktls": false 00:24:28.950 } 00:24:28.950 }, 00:24:28.950 { 00:24:28.950 "method": "sock_impl_set_options", 00:24:28.950 "params": { 00:24:28.950 "impl_name": "posix", 00:24:28.950 "recv_buf_size": 2097152, 00:24:28.950 "send_buf_size": 2097152, 00:24:28.950 "enable_recv_pipe": true, 00:24:28.950 "enable_quickack": false, 00:24:28.950 "enable_placement_id": 0, 00:24:28.950 "enable_zerocopy_send_server": true, 00:24:28.950 "enable_zerocopy_send_client": false, 00:24:28.950 "zerocopy_threshold": 0, 00:24:28.950 "tls_version": 0, 00:24:28.951 "enable_ktls": false 00:24:28.951 } 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "subsystem": "vmd", 00:24:28.951 "config": [] 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "subsystem": "accel", 00:24:28.951 "config": [ 00:24:28.951 { 00:24:28.951 "method": "accel_set_options", 00:24:28.951 "params": { 00:24:28.951 "small_cache_size": 128, 00:24:28.951 "large_cache_size": 16, 00:24:28.951 "task_count": 2048, 00:24:28.951 "sequence_count": 2048, 00:24:28.951 "buf_count": 2048 00:24:28.951 } 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "subsystem": "bdev", 00:24:28.951 "config": [ 00:24:28.951 { 00:24:28.951 "method": "bdev_set_options", 00:24:28.951 "params": { 00:24:28.951 "bdev_io_pool_size": 65535, 00:24:28.951 "bdev_io_cache_size": 256, 00:24:28.951 "bdev_auto_examine": true, 00:24:28.951 "iobuf_small_cache_size": 128, 
00:24:28.951 "iobuf_large_cache_size": 16 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_raid_set_options", 00:24:28.951 "params": { 00:24:28.951 "process_window_size_kb": 1024, 00:24:28.951 "process_max_bandwidth_mb_sec": 0 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_iscsi_set_options", 00:24:28.951 "params": { 00:24:28.951 "timeout_sec": 30 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_set_options", 00:24:28.951 "params": { 00:24:28.951 "action_on_timeout": "none", 00:24:28.951 "timeout_us": 0, 00:24:28.951 "timeout_admin_us": 0, 00:24:28.951 "keep_alive_timeout_ms": 10000, 00:24:28.951 "arbitration_burst": 0, 00:24:28.951 "low_priority_weight": 0, 00:24:28.951 "medium_priority_weight": 0, 00:24:28.951 "high_priority_weight": 0, 00:24:28.951 "nvme_adminq_poll_period_us": 10000, 00:24:28.951 "nvme_ioq_poll_period_us": 0, 00:24:28.951 "io_queue_requests": 512, 00:24:28.951 "delay_cmd_submit": true, 00:24:28.951 "transport_retry_count": 4, 00:24:28.951 "bdev_retry_count": 3, 00:24:28.951 "transport_ack_timeout": 0, 00:24:28.951 "ctrlr_loss_timeout_sec": 0, 00:24:28.951 "reconnect_delay_sec": 0, 00:24:28.951 "fast_io_fail_timeout_sec": 0, 00:24:28.951 "disable_auto_failback": false, 00:24:28.951 "generate_uuids": false, 00:24:28.951 "transport_tos": 0, 00:24:28.951 "nvme_error_stat": false, 00:24:28.951 "rdma_srq_size": 0, 00:24:28.951 "io_path_stat": false, 00:24:28.951 "allow_accel_sequence": false, 00:24:28.951 "rdma_max_cq_size": 0, 00:24:28.951 "rdma_cm_event_timeout_ms": 0, 00:24:28.951 "dhchap_digests": [ 00:24:28.951 "sha256", 00:24:28.951 "sha384", 00:24:28.951 "sha512" 00:24:28.951 ], 00:24:28.951 "dhchap_dhgroups": [ 00:24:28.951 "null", 00:24:28.951 "ffdhe2048", 00:24:28.951 "ffdhe3072", 00:24:28.951 "ffdhe4096", 00:24:28.951 "ffdhe6144", 00:24:28.951 "ffdhe8192" 00:24:28.951 ] 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_attach_controller", 00:24:28.951 "params": { 00:24:28.951 "name": "TLSTEST", 00:24:28.951 "trtype": "TCP", 00:24:28.951 "adrfam": "IPv4", 00:24:28.951 "traddr": "10.0.0.2", 00:24:28.951 "trsvcid": "4420", 00:24:28.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.951 "prchk_reftag": false, 00:24:28.951 "prchk_guard": false, 00:24:28.951 "ctrlr_loss_timeout_sec": 0, 00:24:28.951 "reconnect_delay_sec": 0, 00:24:28.951 "fast_io_fail_timeout_sec": 0, 00:24:28.951 "psk": "key0", 00:24:28.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.951 "hdgst": false, 00:24:28.951 "ddgst": false, 00:24:28.951 "multipath": "multipath" 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_nvme_set_hotplug", 00:24:28.951 "params": { 00:24:28.951 "period_us": 100000, 00:24:28.951 "enable": false 00:24:28.951 } 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "method": "bdev_wait_for_examine" 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }, 00:24:28.951 { 00:24:28.951 "subsystem": "nbd", 00:24:28.951 "config": [] 00:24:28.951 } 00:24:28.951 ] 00:24:28.951 }' 00:24:28.951 [2024-12-09 09:42:04.312611] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
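Because bdevperf was started with -z it idles after loading its config (which already attaches the TLSTEST controller through the PSK-secured listener); no I/O moves until the suite triggers the run over the bdevperf RPC socket at target/tls.sh@213. A sketch of that trigger, noting that -t 20 is the driver script's own timeout while the 10-second run length comes from bdevperf's -t 10:

```bash
# Start the preconfigured verify workload and block until it completes;
# the per-core results JSON printed below comes from this call.
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
```

The numbers it returns are self-consistent: 6285.21 IOPS at 4096 B per I/O works out to about 24.55 MiB/s, matching the reported MiB/s column.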
00:24:28.951 [2024-12-09 09:42:04.312690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2839060 ] 00:24:28.951 [2024-12-09 09:42:04.375956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.951 [2024-12-09 09:42:04.392154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.213 [2024-12-09 09:42:04.521606] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:29.794 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.794 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:29.794 09:42:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:29.794 Running I/O for 10 seconds... 00:24:31.762 5720.00 IOPS, 22.34 MiB/s [2024-12-09T08:42:08.603Z] 6155.50 IOPS, 24.04 MiB/s [2024-12-09T08:42:09.547Z] 6260.33 IOPS, 24.45 MiB/s [2024-12-09T08:42:10.493Z] 6085.75 IOPS, 23.77 MiB/s [2024-12-09T08:42:11.434Z] 6196.00 IOPS, 24.20 MiB/s [2024-12-09T08:42:12.376Z] 6272.83 IOPS, 24.50 MiB/s [2024-12-09T08:42:13.338Z] 6256.43 IOPS, 24.44 MiB/s [2024-12-09T08:42:14.277Z] 6254.12 IOPS, 24.43 MiB/s [2024-12-09T08:42:15.226Z] 6262.00 IOPS, 24.46 MiB/s [2024-12-09T08:42:15.486Z] 6299.70 IOPS, 24.61 MiB/s 00:24:40.033 Latency(us) 00:24:40.033 [2024-12-09T08:42:15.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.033 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:40.033 Verification LBA range: start 0x0 length 0x2000 00:24:40.033 TLSTESTn1 : 10.04 6285.21 24.55 0.00 0.00 20307.08 4751.36 42379.95 00:24:40.033 [2024-12-09T08:42:15.486Z] =================================================================================================================== 00:24:40.033 [2024-12-09T08:42:15.486Z] Total : 6285.21 24.55 0.00 0.00 20307.08 4751.36 42379.95 00:24:40.033 { 00:24:40.033 "results": [ 00:24:40.033 { 00:24:40.033 "job": "TLSTESTn1", 00:24:40.033 "core_mask": "0x4", 00:24:40.033 "workload": "verify", 00:24:40.033 "status": "finished", 00:24:40.033 "verify_range": { 00:24:40.033 "start": 0, 00:24:40.033 "length": 8192 00:24:40.033 }, 00:24:40.033 "queue_depth": 128, 00:24:40.033 "io_size": 4096, 00:24:40.033 "runtime": 10.043255, 00:24:40.033 "iops": 6285.2133098283375, 00:24:40.033 "mibps": 24.551614491516943, 00:24:40.033 "io_failed": 0, 00:24:40.033 "io_timeout": 0, 00:24:40.033 "avg_latency_us": 20307.08063282851, 00:24:40.033 "min_latency_us": 4751.36, 00:24:40.033 "max_latency_us": 42379.94666666666 00:24:40.033 } 00:24:40.033 ], 00:24:40.033 "core_count": 1 00:24:40.033 } 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2839060 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2839060 ']' 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2839060 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2839060 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2839060' 00:24:40.033 killing process with pid 2839060 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2839060 00:24:40.033 Received shutdown signal, test time was about 10.000000 seconds 00:24:40.033 00:24:40.033 Latency(us) 00:24:40.033 [2024-12-09T08:42:15.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.033 [2024-12-09T08:42:15.486Z] =================================================================================================================== 00:24:40.033 [2024-12-09T08:42:15.486Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2839060 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2838751 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2838751 ']' 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2838751 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.033 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2838751 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2838751' 00:24:40.293 killing process with pid 2838751 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2838751 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2838751 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2841586 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2841586 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
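Before the new target finishes coming up below, a quick sanity check on the 10-second run above: the MiB/s column is just IOPS scaled by the 4096-byte I/O size, so the reported figures are internally consistent.

awk 'BEGIN { printf "%.2f MiB/s\n", 6285.2133098283375 * 4096 / (1024 * 1024) }'
# prints 24.55 MiB/s, matching the reported "mibps": 24.551614491516943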
00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2841586 ']' 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.293 09:42:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:40.293 [2024-12-09 09:42:15.670479] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:40.293 [2024-12-09 09:42:15.670531] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.553 [2024-12-09 09:42:15.766954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.553 [2024-12-09 09:42:15.783080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.553 [2024-12-09 09:42:15.783118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.553 [2024-12-09 09:42:15.783126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.553 [2024-12-09 09:42:15.783133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.553 [2024-12-09 09:42:15.783138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
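The setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va call that runs next (its body is target/tls.sh@52 through @59) boils down to the RPC sequence below, sketched with the long jenkins workspace prefix shortened to an assumed $rootdir pointing at the spdk checkout:

rpc="$rootdir/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The -k on nvmf_subsystem_add_listener is what makes this a TLS listener (and what triggers the "TLS support is considered experimental" notice), while --psk key0 pins host1 to the same keyring entry the initiator presents.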
00:24:40.553 [2024-12-09 09:42:15.783745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.D6ZEjKx6Va 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.D6ZEjKx6Va 00:24:41.123 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:41.383 [2024-12-09 09:42:16.679837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.383 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:41.643 09:42:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:41.643 [2024-12-09 09:42:17.064784] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.643 [2024-12-09 09:42:17.065130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.903 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:41.903 malloc0 00:24:41.903 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:42.163 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:42.424 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2842058 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2842058 /var/tmp/bdevperf.sock 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 2842058 ']' 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.686 09:42:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.686 [2024-12-09 09:42:17.926193] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:42.686 [2024-12-09 09:42:17.926261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842058 ] 00:24:42.686 [2024-12-09 09:42:18.014723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.686 [2024-12-09 09:42:18.033683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:42.686 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.686 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.686 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:42.948 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:43.210 [2024-12-09 09:42:18.418427] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.210 nvme0n1 00:24:43.210 09:42:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:43.210 Running I/O for 1 seconds... 
00:24:44.154 5776.00 IOPS, 22.56 MiB/s 00:24:44.154 Latency(us) 00:24:44.154 [2024-12-09T08:42:19.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.154 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.154 Verification LBA range: start 0x0 length 0x2000 00:24:44.154 nvme0n1 : 1.02 5816.25 22.72 0.00 0.00 21830.70 5870.93 32331.09 00:24:44.154 [2024-12-09T08:42:19.607Z] =================================================================================================================== 00:24:44.154 [2024-12-09T08:42:19.607Z] Total : 5816.25 22.72 0.00 0.00 21830.70 5870.93 32331.09 00:24:44.154 { 00:24:44.154 "results": [ 00:24:44.154 { 00:24:44.154 "job": "nvme0n1", 00:24:44.154 "core_mask": "0x2", 00:24:44.154 "workload": "verify", 00:24:44.154 "status": "finished", 00:24:44.154 "verify_range": { 00:24:44.154 "start": 0, 00:24:44.154 "length": 8192 00:24:44.154 }, 00:24:44.154 "queue_depth": 128, 00:24:44.154 "io_size": 4096, 00:24:44.154 "runtime": 1.015259, 00:24:44.154 "iops": 5816.249843635959, 00:24:44.154 "mibps": 22.719725951702966, 00:24:44.154 "io_failed": 0, 00:24:44.154 "io_timeout": 0, 00:24:44.154 "avg_latency_us": 21830.697329946375, 00:24:44.154 "min_latency_us": 5870.933333333333, 00:24:44.154 "max_latency_us": 32331.093333333334 00:24:44.154 } 00:24:44.154 ], 00:24:44.154 "core_count": 1 00:24:44.154 } 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2842058 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2842058 ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2842058 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842058 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842058' 00:24:44.416 killing process with pid 2842058 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2842058 00:24:44.416 Received shutdown signal, test time was about 1.000000 seconds 00:24:44.416 00:24:44.416 Latency(us) 00:24:44.416 [2024-12-09T08:42:19.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.416 [2024-12-09T08:42:19.869Z] =================================================================================================================== 00:24:44.416 [2024-12-09T08:42:19.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2842058 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2841586 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2841586 ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2841586 00:24:44.416 09:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2841586 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2841586' 00:24:44.416 killing process with pid 2841586 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2841586 00:24:44.416 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2841586 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2842555 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2842555 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2842555 ']' 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.677 09:42:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.677 [2024-12-09 09:42:20.028239] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:44.677 [2024-12-09 09:42:20.028294] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.677 [2024-12-09 09:42:20.113505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.938 [2024-12-09 09:42:20.132437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.938 [2024-12-09 09:42:20.132476] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
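Once this target settles, the bdevperf that pairs with it is started idle (-z: sit on the RPC socket until told to run) and then driven entirely over /var/tmp/bdevperf.sock, the same pattern already used for the 09:42:18 run above. Sketched here with the commands as they appear in the log:

B=/var/tmp/bdevperf.sock
$rootdir/scripts/rpc.py -s $B keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va
$rootdir/scripts/rpc.py -s $B bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
$rootdir/examples/bdev/bdevperf/bdevperf.py -s $B perform_tests   # kicks off the timed run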
00:24:44.938 [2024-12-09 09:42:20.132484] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.938 [2024-12-09 09:42:20.132491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.938 [2024-12-09 09:42:20.132497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.938 [2024-12-09 09:42:20.133104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.512 [2024-12-09 09:42:20.891149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.512 malloc0 00:24:45.512 [2024-12-09 09:42:20.921341] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:45.512 [2024-12-09 09:42:20.921685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2842651 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2842651 /var/tmp/bdevperf.sock 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2842651 ']' 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.512 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.513 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.513 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.513 09:42:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.774 [2024-12-09 09:42:21.006018] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
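Meanwhile, a cross-check on the 1-second run at 09:42:19 above: with the queue held at depth 128, Little's law predicts an average latency of roughly queue_depth / IOPS, which lands within about 1% of what bdevperf reported.

awk 'BEGIN { printf "%.0f us (reported 21830.7 us)\n", 128 / 5816.25 * 1e6 }'
# prints 22007 us; the small gap is plausibly ramp-up/teardown inside the measured runtime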
00:24:45.774 [2024-12-09 09:42:21.006085] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842651 ] 00:24:45.774 [2024-12-09 09:42:21.096172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.774 [2024-12-09 09:42:21.115468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.774 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.774 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:45.774 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.D6ZEjKx6Va 00:24:46.035 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:46.296 [2024-12-09 09:42:21.488506] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:46.296 nvme0n1 00:24:46.296 09:42:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:46.296 Running I/O for 1 seconds... 00:24:47.238 4204.00 IOPS, 16.42 MiB/s 00:24:47.238 Latency(us) 00:24:47.238 [2024-12-09T08:42:22.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.238 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:47.238 Verification LBA range: start 0x0 length 0x2000 00:24:47.238 nvme0n1 : 1.01 4272.94 16.69 0.00 0.00 29767.38 4642.13 76895.57 00:24:47.238 [2024-12-09T08:42:22.691Z] =================================================================================================================== 00:24:47.238 [2024-12-09T08:42:22.691Z] Total : 4272.94 16.69 0.00 0.00 29767.38 4642.13 76895.57 00:24:47.238 { 00:24:47.238 "results": [ 00:24:47.238 { 00:24:47.238 "job": "nvme0n1", 00:24:47.238 "core_mask": "0x2", 00:24:47.238 "workload": "verify", 00:24:47.238 "status": "finished", 00:24:47.238 "verify_range": { 00:24:47.238 "start": 0, 00:24:47.238 "length": 8192 00:24:47.238 }, 00:24:47.238 "queue_depth": 128, 00:24:47.238 "io_size": 4096, 00:24:47.238 "runtime": 1.013823, 00:24:47.238 "iops": 4272.935216502289, 00:24:47.238 "mibps": 16.691153189462067, 00:24:47.238 "io_failed": 0, 00:24:47.238 "io_timeout": 0, 00:24:47.238 "avg_latency_us": 29767.383736534317, 00:24:47.238 "min_latency_us": 4642.133333333333, 00:24:47.238 "max_latency_us": 76895.57333333333 00:24:47.238 } 00:24:47.238 ], 00:24:47.238 "core_count": 1 00:24:47.238 } 00:24:47.499 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:24:47.499 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.499 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.499 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.499 09:42:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:24:47.499 "subsystems": [ 00:24:47.499 { 00:24:47.499 "subsystem": "keyring", 00:24:47.499 "config": [ 00:24:47.500 { 00:24:47.500 "method": "keyring_file_add_key", 00:24:47.500 "params": { 00:24:47.500 "name": "key0", 00:24:47.500 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:47.500 } 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "iobuf", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "iobuf_set_options", 00:24:47.500 "params": { 00:24:47.500 "small_pool_count": 8192, 00:24:47.500 "large_pool_count": 1024, 00:24:47.500 "small_bufsize": 8192, 00:24:47.500 "large_bufsize": 135168, 00:24:47.500 "enable_numa": false 00:24:47.500 } 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "sock", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "sock_set_default_impl", 00:24:47.500 "params": { 00:24:47.500 "impl_name": "posix" 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "sock_impl_set_options", 00:24:47.500 "params": { 00:24:47.500 "impl_name": "ssl", 00:24:47.500 "recv_buf_size": 4096, 00:24:47.500 "send_buf_size": 4096, 00:24:47.500 "enable_recv_pipe": true, 00:24:47.500 "enable_quickack": false, 00:24:47.500 "enable_placement_id": 0, 00:24:47.500 "enable_zerocopy_send_server": true, 00:24:47.500 "enable_zerocopy_send_client": false, 00:24:47.500 "zerocopy_threshold": 0, 00:24:47.500 "tls_version": 0, 00:24:47.500 "enable_ktls": false 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "sock_impl_set_options", 00:24:47.500 "params": { 00:24:47.500 "impl_name": "posix", 00:24:47.500 "recv_buf_size": 2097152, 00:24:47.500 "send_buf_size": 2097152, 00:24:47.500 "enable_recv_pipe": true, 00:24:47.500 "enable_quickack": false, 00:24:47.500 "enable_placement_id": 0, 00:24:47.500 "enable_zerocopy_send_server": true, 00:24:47.500 "enable_zerocopy_send_client": false, 00:24:47.500 "zerocopy_threshold": 0, 00:24:47.500 "tls_version": 0, 00:24:47.500 "enable_ktls": false 00:24:47.500 } 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "vmd", 00:24:47.500 "config": [] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "accel", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "accel_set_options", 00:24:47.500 "params": { 00:24:47.500 "small_cache_size": 128, 00:24:47.500 "large_cache_size": 16, 00:24:47.500 "task_count": 2048, 00:24:47.500 "sequence_count": 2048, 00:24:47.500 "buf_count": 2048 00:24:47.500 } 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "bdev", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "bdev_set_options", 00:24:47.500 "params": { 00:24:47.500 "bdev_io_pool_size": 65535, 00:24:47.500 "bdev_io_cache_size": 256, 00:24:47.500 "bdev_auto_examine": true, 00:24:47.500 "iobuf_small_cache_size": 128, 00:24:47.500 "iobuf_large_cache_size": 16 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_raid_set_options", 00:24:47.500 "params": { 00:24:47.500 "process_window_size_kb": 1024, 00:24:47.500 "process_max_bandwidth_mb_sec": 0 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_iscsi_set_options", 00:24:47.500 "params": { 00:24:47.500 "timeout_sec": 30 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_nvme_set_options", 00:24:47.500 "params": { 00:24:47.500 "action_on_timeout": "none", 00:24:47.500 
"timeout_us": 0, 00:24:47.500 "timeout_admin_us": 0, 00:24:47.500 "keep_alive_timeout_ms": 10000, 00:24:47.500 "arbitration_burst": 0, 00:24:47.500 "low_priority_weight": 0, 00:24:47.500 "medium_priority_weight": 0, 00:24:47.500 "high_priority_weight": 0, 00:24:47.500 "nvme_adminq_poll_period_us": 10000, 00:24:47.500 "nvme_ioq_poll_period_us": 0, 00:24:47.500 "io_queue_requests": 0, 00:24:47.500 "delay_cmd_submit": true, 00:24:47.500 "transport_retry_count": 4, 00:24:47.500 "bdev_retry_count": 3, 00:24:47.500 "transport_ack_timeout": 0, 00:24:47.500 "ctrlr_loss_timeout_sec": 0, 00:24:47.500 "reconnect_delay_sec": 0, 00:24:47.500 "fast_io_fail_timeout_sec": 0, 00:24:47.500 "disable_auto_failback": false, 00:24:47.500 "generate_uuids": false, 00:24:47.500 "transport_tos": 0, 00:24:47.500 "nvme_error_stat": false, 00:24:47.500 "rdma_srq_size": 0, 00:24:47.500 "io_path_stat": false, 00:24:47.500 "allow_accel_sequence": false, 00:24:47.500 "rdma_max_cq_size": 0, 00:24:47.500 "rdma_cm_event_timeout_ms": 0, 00:24:47.500 "dhchap_digests": [ 00:24:47.500 "sha256", 00:24:47.500 "sha384", 00:24:47.500 "sha512" 00:24:47.500 ], 00:24:47.500 "dhchap_dhgroups": [ 00:24:47.500 "null", 00:24:47.500 "ffdhe2048", 00:24:47.500 "ffdhe3072", 00:24:47.500 "ffdhe4096", 00:24:47.500 "ffdhe6144", 00:24:47.500 "ffdhe8192" 00:24:47.500 ] 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_nvme_set_hotplug", 00:24:47.500 "params": { 00:24:47.500 "period_us": 100000, 00:24:47.500 "enable": false 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_malloc_create", 00:24:47.500 "params": { 00:24:47.500 "name": "malloc0", 00:24:47.500 "num_blocks": 8192, 00:24:47.500 "block_size": 4096, 00:24:47.500 "physical_block_size": 4096, 00:24:47.500 "uuid": "4f068c7c-d07f-4e36-9327-28553069388c", 00:24:47.500 "optimal_io_boundary": 0, 00:24:47.500 "md_size": 0, 00:24:47.500 "dif_type": 0, 00:24:47.500 "dif_is_head_of_md": false, 00:24:47.500 "dif_pi_format": 0 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "bdev_wait_for_examine" 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "nbd", 00:24:47.500 "config": [] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "scheduler", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "framework_set_scheduler", 00:24:47.500 "params": { 00:24:47.500 "name": "static" 00:24:47.500 } 00:24:47.500 } 00:24:47.500 ] 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "subsystem": "nvmf", 00:24:47.500 "config": [ 00:24:47.500 { 00:24:47.500 "method": "nvmf_set_config", 00:24:47.500 "params": { 00:24:47.500 "discovery_filter": "match_any", 00:24:47.500 "admin_cmd_passthru": { 00:24:47.500 "identify_ctrlr": false 00:24:47.500 }, 00:24:47.500 "dhchap_digests": [ 00:24:47.500 "sha256", 00:24:47.500 "sha384", 00:24:47.500 "sha512" 00:24:47.500 ], 00:24:47.500 "dhchap_dhgroups": [ 00:24:47.500 "null", 00:24:47.500 "ffdhe2048", 00:24:47.500 "ffdhe3072", 00:24:47.500 "ffdhe4096", 00:24:47.500 "ffdhe6144", 00:24:47.500 "ffdhe8192" 00:24:47.500 ] 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_set_max_subsystems", 00:24:47.500 "params": { 00:24:47.500 "max_subsystems": 1024 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_set_crdt", 00:24:47.500 "params": { 00:24:47.500 "crdt1": 0, 00:24:47.500 "crdt2": 0, 00:24:47.500 "crdt3": 0 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_create_transport", 00:24:47.500 "params": 
{ 00:24:47.500 "trtype": "TCP", 00:24:47.500 "max_queue_depth": 128, 00:24:47.500 "max_io_qpairs_per_ctrlr": 127, 00:24:47.500 "in_capsule_data_size": 4096, 00:24:47.500 "max_io_size": 131072, 00:24:47.500 "io_unit_size": 131072, 00:24:47.500 "max_aq_depth": 128, 00:24:47.500 "num_shared_buffers": 511, 00:24:47.500 "buf_cache_size": 4294967295, 00:24:47.500 "dif_insert_or_strip": false, 00:24:47.500 "zcopy": false, 00:24:47.500 "c2h_success": false, 00:24:47.500 "sock_priority": 0, 00:24:47.500 "abort_timeout_sec": 1, 00:24:47.500 "ack_timeout": 0, 00:24:47.500 "data_wr_pool_size": 0 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_create_subsystem", 00:24:47.500 "params": { 00:24:47.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.500 "allow_any_host": false, 00:24:47.500 "serial_number": "00000000000000000000", 00:24:47.500 "model_number": "SPDK bdev Controller", 00:24:47.500 "max_namespaces": 32, 00:24:47.500 "min_cntlid": 1, 00:24:47.500 "max_cntlid": 65519, 00:24:47.500 "ana_reporting": false 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_subsystem_add_host", 00:24:47.500 "params": { 00:24:47.500 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.500 "host": "nqn.2016-06.io.spdk:host1", 00:24:47.500 "psk": "key0" 00:24:47.500 } 00:24:47.500 }, 00:24:47.500 { 00:24:47.500 "method": "nvmf_subsystem_add_ns", 00:24:47.501 "params": { 00:24:47.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.501 "namespace": { 00:24:47.501 "nsid": 1, 00:24:47.501 "bdev_name": "malloc0", 00:24:47.501 "nguid": "4F068C7CD07F4E36932728553069388C", 00:24:47.501 "uuid": "4f068c7c-d07f-4e36-9327-28553069388c", 00:24:47.501 "no_auto_visible": false 00:24:47.501 } 00:24:47.501 } 00:24:47.501 }, 00:24:47.501 { 00:24:47.501 "method": "nvmf_subsystem_add_listener", 00:24:47.501 "params": { 00:24:47.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.501 "listen_address": { 00:24:47.501 "trtype": "TCP", 00:24:47.501 "adrfam": "IPv4", 00:24:47.501 "traddr": "10.0.0.2", 00:24:47.501 "trsvcid": "4420" 00:24:47.501 }, 00:24:47.501 "secure_channel": false, 00:24:47.501 "sock_impl": "ssl" 00:24:47.501 } 00:24:47.501 } 00:24:47.501 ] 00:24:47.501 } 00:24:47.501 ] 00:24:47.501 }' 00:24:47.501 09:42:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:47.762 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:24:47.762 "subsystems": [ 00:24:47.762 { 00:24:47.762 "subsystem": "keyring", 00:24:47.762 "config": [ 00:24:47.762 { 00:24:47.762 "method": "keyring_file_add_key", 00:24:47.762 "params": { 00:24:47.762 "name": "key0", 00:24:47.762 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:47.762 } 00:24:47.762 } 00:24:47.762 ] 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "subsystem": "iobuf", 00:24:47.762 "config": [ 00:24:47.762 { 00:24:47.762 "method": "iobuf_set_options", 00:24:47.762 "params": { 00:24:47.762 "small_pool_count": 8192, 00:24:47.762 "large_pool_count": 1024, 00:24:47.762 "small_bufsize": 8192, 00:24:47.762 "large_bufsize": 135168, 00:24:47.762 "enable_numa": false 00:24:47.762 } 00:24:47.762 } 00:24:47.762 ] 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "subsystem": "sock", 00:24:47.762 "config": [ 00:24:47.762 { 00:24:47.762 "method": "sock_set_default_impl", 00:24:47.762 "params": { 00:24:47.762 "impl_name": "posix" 00:24:47.762 } 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "method": "sock_impl_set_options", 00:24:47.762 
"params": { 00:24:47.762 "impl_name": "ssl", 00:24:47.762 "recv_buf_size": 4096, 00:24:47.762 "send_buf_size": 4096, 00:24:47.762 "enable_recv_pipe": true, 00:24:47.762 "enable_quickack": false, 00:24:47.762 "enable_placement_id": 0, 00:24:47.762 "enable_zerocopy_send_server": true, 00:24:47.762 "enable_zerocopy_send_client": false, 00:24:47.762 "zerocopy_threshold": 0, 00:24:47.762 "tls_version": 0, 00:24:47.762 "enable_ktls": false 00:24:47.762 } 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "method": "sock_impl_set_options", 00:24:47.762 "params": { 00:24:47.762 "impl_name": "posix", 00:24:47.762 "recv_buf_size": 2097152, 00:24:47.762 "send_buf_size": 2097152, 00:24:47.762 "enable_recv_pipe": true, 00:24:47.762 "enable_quickack": false, 00:24:47.762 "enable_placement_id": 0, 00:24:47.762 "enable_zerocopy_send_server": true, 00:24:47.762 "enable_zerocopy_send_client": false, 00:24:47.762 "zerocopy_threshold": 0, 00:24:47.762 "tls_version": 0, 00:24:47.762 "enable_ktls": false 00:24:47.762 } 00:24:47.762 } 00:24:47.762 ] 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "subsystem": "vmd", 00:24:47.762 "config": [] 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "subsystem": "accel", 00:24:47.762 "config": [ 00:24:47.762 { 00:24:47.762 "method": "accel_set_options", 00:24:47.762 "params": { 00:24:47.762 "small_cache_size": 128, 00:24:47.762 "large_cache_size": 16, 00:24:47.762 "task_count": 2048, 00:24:47.762 "sequence_count": 2048, 00:24:47.762 "buf_count": 2048 00:24:47.762 } 00:24:47.762 } 00:24:47.762 ] 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "subsystem": "bdev", 00:24:47.762 "config": [ 00:24:47.762 { 00:24:47.762 "method": "bdev_set_options", 00:24:47.762 "params": { 00:24:47.762 "bdev_io_pool_size": 65535, 00:24:47.762 "bdev_io_cache_size": 256, 00:24:47.762 "bdev_auto_examine": true, 00:24:47.762 "iobuf_small_cache_size": 128, 00:24:47.762 "iobuf_large_cache_size": 16 00:24:47.762 } 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "method": "bdev_raid_set_options", 00:24:47.762 "params": { 00:24:47.762 "process_window_size_kb": 1024, 00:24:47.762 "process_max_bandwidth_mb_sec": 0 00:24:47.762 } 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "method": "bdev_iscsi_set_options", 00:24:47.762 "params": { 00:24:47.762 "timeout_sec": 30 00:24:47.762 } 00:24:47.762 }, 00:24:47.762 { 00:24:47.762 "method": "bdev_nvme_set_options", 00:24:47.762 "params": { 00:24:47.762 "action_on_timeout": "none", 00:24:47.762 "timeout_us": 0, 00:24:47.762 "timeout_admin_us": 0, 00:24:47.762 "keep_alive_timeout_ms": 10000, 00:24:47.762 "arbitration_burst": 0, 00:24:47.762 "low_priority_weight": 0, 00:24:47.762 "medium_priority_weight": 0, 00:24:47.762 "high_priority_weight": 0, 00:24:47.762 "nvme_adminq_poll_period_us": 10000, 00:24:47.762 "nvme_ioq_poll_period_us": 0, 00:24:47.762 "io_queue_requests": 512, 00:24:47.762 "delay_cmd_submit": true, 00:24:47.762 "transport_retry_count": 4, 00:24:47.762 "bdev_retry_count": 3, 00:24:47.762 "transport_ack_timeout": 0, 00:24:47.762 "ctrlr_loss_timeout_sec": 0, 00:24:47.762 "reconnect_delay_sec": 0, 00:24:47.762 "fast_io_fail_timeout_sec": 0, 00:24:47.762 "disable_auto_failback": false, 00:24:47.762 "generate_uuids": false, 00:24:47.762 "transport_tos": 0, 00:24:47.762 "nvme_error_stat": false, 00:24:47.762 "rdma_srq_size": 0, 00:24:47.762 "io_path_stat": false, 00:24:47.762 "allow_accel_sequence": false, 00:24:47.762 "rdma_max_cq_size": 0, 00:24:47.762 "rdma_cm_event_timeout_ms": 0, 00:24:47.762 "dhchap_digests": [ 00:24:47.762 "sha256", 00:24:47.763 "sha384", 00:24:47.763 
"sha512" 00:24:47.763 ], 00:24:47.763 "dhchap_dhgroups": [ 00:24:47.763 "null", 00:24:47.763 "ffdhe2048", 00:24:47.763 "ffdhe3072", 00:24:47.763 "ffdhe4096", 00:24:47.763 "ffdhe6144", 00:24:47.763 "ffdhe8192" 00:24:47.763 ] 00:24:47.763 } 00:24:47.763 }, 00:24:47.763 { 00:24:47.763 "method": "bdev_nvme_attach_controller", 00:24:47.763 "params": { 00:24:47.763 "name": "nvme0", 00:24:47.763 "trtype": "TCP", 00:24:47.763 "adrfam": "IPv4", 00:24:47.763 "traddr": "10.0.0.2", 00:24:47.763 "trsvcid": "4420", 00:24:47.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.763 "prchk_reftag": false, 00:24:47.763 "prchk_guard": false, 00:24:47.763 "ctrlr_loss_timeout_sec": 0, 00:24:47.763 "reconnect_delay_sec": 0, 00:24:47.763 "fast_io_fail_timeout_sec": 0, 00:24:47.763 "psk": "key0", 00:24:47.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.763 "hdgst": false, 00:24:47.763 "ddgst": false, 00:24:47.763 "multipath": "multipath" 00:24:47.763 } 00:24:47.763 }, 00:24:47.763 { 00:24:47.763 "method": "bdev_nvme_set_hotplug", 00:24:47.763 "params": { 00:24:47.763 "period_us": 100000, 00:24:47.763 "enable": false 00:24:47.763 } 00:24:47.763 }, 00:24:47.763 { 00:24:47.763 "method": "bdev_enable_histogram", 00:24:47.763 "params": { 00:24:47.763 "name": "nvme0n1", 00:24:47.763 "enable": true 00:24:47.763 } 00:24:47.763 }, 00:24:47.763 { 00:24:47.763 "method": "bdev_wait_for_examine" 00:24:47.763 } 00:24:47.763 ] 00:24:47.763 }, 00:24:47.763 { 00:24:47.763 "subsystem": "nbd", 00:24:47.763 "config": [] 00:24:47.763 } 00:24:47.763 ] 00:24:47.763 }' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2842651 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2842651 ']' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2842651 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842651 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842651' 00:24:47.763 killing process with pid 2842651 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2842651 00:24:47.763 Received shutdown signal, test time was about 1.000000 seconds 00:24:47.763 00:24:47.763 Latency(us) 00:24:47.763 [2024-12-09T08:42:23.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.763 [2024-12-09T08:42:23.216Z] =================================================================================================================== 00:24:47.763 [2024-12-09T08:42:23.216Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2842651 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2842555 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2842555 
']' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2842555 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.763 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2842555 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2842555' 00:24:48.025 killing process with pid 2842555 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2842555 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2842555 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:48.025 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:24:48.025 "subsystems": [ 00:24:48.025 { 00:24:48.025 "subsystem": "keyring", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "keyring_file_add_key", 00:24:48.025 "params": { 00:24:48.025 "name": "key0", 00:24:48.025 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:48.025 } 00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "iobuf", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "iobuf_set_options", 00:24:48.025 "params": { 00:24:48.025 "small_pool_count": 8192, 00:24:48.025 "large_pool_count": 1024, 00:24:48.025 "small_bufsize": 8192, 00:24:48.025 "large_bufsize": 135168, 00:24:48.025 "enable_numa": false 00:24:48.025 } 00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "sock", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "sock_set_default_impl", 00:24:48.025 "params": { 00:24:48.025 "impl_name": "posix" 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "sock_impl_set_options", 00:24:48.025 "params": { 00:24:48.025 "impl_name": "ssl", 00:24:48.025 "recv_buf_size": 4096, 00:24:48.025 "send_buf_size": 4096, 00:24:48.025 "enable_recv_pipe": true, 00:24:48.025 "enable_quickack": false, 00:24:48.025 "enable_placement_id": 0, 00:24:48.025 "enable_zerocopy_send_server": true, 00:24:48.025 "enable_zerocopy_send_client": false, 00:24:48.025 "zerocopy_threshold": 0, 00:24:48.025 "tls_version": 0, 00:24:48.025 "enable_ktls": false 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "sock_impl_set_options", 00:24:48.025 "params": { 00:24:48.025 "impl_name": "posix", 00:24:48.025 "recv_buf_size": 2097152, 00:24:48.025 "send_buf_size": 2097152, 00:24:48.025 "enable_recv_pipe": true, 00:24:48.025 "enable_quickack": false, 00:24:48.025 "enable_placement_id": 0, 00:24:48.025 "enable_zerocopy_send_server": true, 00:24:48.025 "enable_zerocopy_send_client": false, 00:24:48.025 "zerocopy_threshold": 0, 00:24:48.025 "tls_version": 0, 00:24:48.025 "enable_ktls": 
false 00:24:48.025 } 00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "vmd", 00:24:48.025 "config": [] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "accel", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "accel_set_options", 00:24:48.025 "params": { 00:24:48.025 "small_cache_size": 128, 00:24:48.025 "large_cache_size": 16, 00:24:48.025 "task_count": 2048, 00:24:48.025 "sequence_count": 2048, 00:24:48.025 "buf_count": 2048 00:24:48.025 } 00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "bdev", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "bdev_set_options", 00:24:48.025 "params": { 00:24:48.025 "bdev_io_pool_size": 65535, 00:24:48.025 "bdev_io_cache_size": 256, 00:24:48.025 "bdev_auto_examine": true, 00:24:48.025 "iobuf_small_cache_size": 128, 00:24:48.025 "iobuf_large_cache_size": 16 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_raid_set_options", 00:24:48.025 "params": { 00:24:48.025 "process_window_size_kb": 1024, 00:24:48.025 "process_max_bandwidth_mb_sec": 0 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_iscsi_set_options", 00:24:48.025 "params": { 00:24:48.025 "timeout_sec": 30 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_nvme_set_options", 00:24:48.025 "params": { 00:24:48.025 "action_on_timeout": "none", 00:24:48.025 "timeout_us": 0, 00:24:48.025 "timeout_admin_us": 0, 00:24:48.025 "keep_alive_timeout_ms": 10000, 00:24:48.025 "arbitration_burst": 0, 00:24:48.025 "low_priority_weight": 0, 00:24:48.025 "medium_priority_weight": 0, 00:24:48.025 "high_priority_weight": 0, 00:24:48.025 "nvme_adminq_poll_period_us": 10000, 00:24:48.025 "nvme_ioq_poll_period_us": 0, 00:24:48.025 "io_queue_requests": 0, 00:24:48.025 "delay_cmd_submit": true, 00:24:48.025 "transport_retry_count": 4, 00:24:48.025 "bdev_retry_count": 3, 00:24:48.025 "transport_ack_timeout": 0, 00:24:48.025 "ctrlr_loss_timeout_sec": 0, 00:24:48.025 "reconnect_delay_sec": 0, 00:24:48.025 "fast_io_fail_timeout_sec": 0, 00:24:48.025 "disable_auto_failback": false, 00:24:48.025 "generate_uuids": false, 00:24:48.025 "transport_tos": 0, 00:24:48.025 "nvme_error_stat": false, 00:24:48.025 "rdma_srq_size": 0, 00:24:48.025 "io_path_stat": false, 00:24:48.025 "allow_accel_sequence": false, 00:24:48.025 "rdma_max_cq_size": 0, 00:24:48.025 "rdma_cm_event_timeout_ms": 0, 00:24:48.025 "dhchap_digests": [ 00:24:48.025 "sha256", 00:24:48.025 "sha384", 00:24:48.025 "sha512" 00:24:48.025 ], 00:24:48.025 "dhchap_dhgroups": [ 00:24:48.025 "null", 00:24:48.025 "ffdhe2048", 00:24:48.025 "ffdhe3072", 00:24:48.025 "ffdhe4096", 00:24:48.025 "ffdhe6144", 00:24:48.025 "ffdhe8192" 00:24:48.025 ] 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_nvme_set_hotplug", 00:24:48.025 "params": { 00:24:48.025 "period_us": 100000, 00:24:48.025 "enable": false 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_malloc_create", 00:24:48.025 "params": { 00:24:48.025 "name": "malloc0", 00:24:48.025 "num_blocks": 8192, 00:24:48.025 "block_size": 4096, 00:24:48.025 "physical_block_size": 4096, 00:24:48.025 "uuid": "4f068c7c-d07f-4e36-9327-28553069388c", 00:24:48.025 "optimal_io_boundary": 0, 00:24:48.025 "md_size": 0, 00:24:48.025 "dif_type": 0, 00:24:48.025 "dif_is_head_of_md": false, 00:24:48.025 "dif_pi_format": 0 00:24:48.025 } 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "method": "bdev_wait_for_examine" 
00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "nbd", 00:24:48.025 "config": [] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "scheduler", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "framework_set_scheduler", 00:24:48.025 "params": { 00:24:48.025 "name": "static" 00:24:48.025 } 00:24:48.025 } 00:24:48.025 ] 00:24:48.025 }, 00:24:48.025 { 00:24:48.025 "subsystem": "nvmf", 00:24:48.025 "config": [ 00:24:48.025 { 00:24:48.025 "method": "nvmf_set_config", 00:24:48.025 "params": { 00:24:48.025 "discovery_filter": "match_any", 00:24:48.025 "admin_cmd_passthru": { 00:24:48.025 "identify_ctrlr": false 00:24:48.025 }, 00:24:48.025 "dhchap_digests": [ 00:24:48.025 "sha256", 00:24:48.025 "sha384", 00:24:48.025 "sha512" 00:24:48.025 ], 00:24:48.025 "dhchap_dhgroups": [ 00:24:48.025 "null", 00:24:48.025 "ffdhe2048", 00:24:48.025 "ffdhe3072", 00:24:48.025 "ffdhe4096", 00:24:48.025 "ffdhe6144", 00:24:48.025 "ffdhe8192" 00:24:48.025 ] 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_set_max_subsystems", 00:24:48.026 "params": { 00:24:48.026 "max_subsystems": 1024 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_set_crdt", 00:24:48.026 "params": { 00:24:48.026 "crdt1": 0, 00:24:48.026 "crdt2": 0, 00:24:48.026 "crdt3": 0 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_create_transport", 00:24:48.026 "params": { 00:24:48.026 "trtype": "TCP", 00:24:48.026 "max_queue_depth": 128, 00:24:48.026 "max_io_qpairs_per_ctrlr": 127, 00:24:48.026 "in_capsule_data_size": 4096, 00:24:48.026 "max_io_size": 131072, 00:24:48.026 "io_unit_size": 131072, 00:24:48.026 "max_aq_depth": 128, 00:24:48.026 "num_shared_buffers": 511, 00:24:48.026 "buf_cache_size": 4294967295, 00:24:48.026 "dif_insert_or_strip": false, 00:24:48.026 "zcopy": false, 00:24:48.026 "c2h_success": false, 00:24:48.026 "sock_priority": 0, 00:24:48.026 "abort_timeout_sec": 1, 00:24:48.026 "ack_timeout": 0, 00:24:48.026 "data_wr_pool_size": 0 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_create_subsystem", 00:24:48.026 "params": { 00:24:48.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.026 "allow_any_host": false, 00:24:48.026 "serial_number": "00000000000000000000", 00:24:48.026 "model_number": "SPDK bdev Controller", 00:24:48.026 "max_namespaces": 32, 00:24:48.026 "min_cntlid": 1, 00:24:48.026 "max_cntlid": 65519, 00:24:48.026 "ana_reporting": false 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_subsystem_add_host", 00:24:48.026 "params": { 00:24:48.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.026 "host": "nqn.2016-06.io.spdk:host1", 00:24:48.026 "psk": "key0" 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_subsystem_add_ns", 00:24:48.026 "params": { 00:24:48.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.026 "namespace": { 00:24:48.026 "nsid": 1, 00:24:48.026 "bdev_name": "malloc0", 00:24:48.026 "nguid": "4F068C7CD07F4E36932728553069388C", 00:24:48.026 "uuid": "4f068c7c-d07f-4e36-9327-28553069388c", 00:24:48.026 "no_auto_visible": false 00:24:48.026 } 00:24:48.026 } 00:24:48.026 }, 00:24:48.026 { 00:24:48.026 "method": "nvmf_subsystem_add_listener", 00:24:48.026 "params": { 00:24:48.026 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:48.026 "listen_address": { 00:24:48.026 "trtype": "TCP", 00:24:48.026 "adrfam": "IPv4", 00:24:48.026 "traddr": "10.0.0.2", 00:24:48.026 "trsvcid": "4420" 00:24:48.026 }, 00:24:48.026 
"secure_channel": false, 00:24:48.026 "sock_impl": "ssl" 00:24:48.026 } 00:24:48.026 } 00:24:48.026 ] 00:24:48.026 } 00:24:48.026 ] 00:24:48.026 }' 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2843206 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2843206 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2843206 ']' 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.026 09:42:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.026 [2024-12-09 09:42:23.435269] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:24:48.026 [2024-12-09 09:42:23.435329] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.287 [2024-12-09 09:42:23.523301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.287 [2024-12-09 09:42:23.538682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:48.287 [2024-12-09 09:42:23.538710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.287 [2024-12-09 09:42:23.538716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.287 [2024-12-09 09:42:23.538721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.287 [2024-12-09 09:42:23.538725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:48.287 [2024-12-09 09:42:23.539192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.287 [2024-12-09 09:42:23.727813] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.547 [2024-12-09 09:42:23.759846] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.547 [2024-12-09 09:42:23.760052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:48.807 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.807 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:48.807 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.807 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:48.807 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2843357 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2843357 /var/tmp/bdevperf.sock 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2843357 ']' 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
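Before bdevperf comes up, note how the TLS pieces of the two configs pair off: the target side (above) registered the PSK file as key0, required it for nqn.2016-06.io.spdk:host1, and tagged the listener with sock_impl ssl, while the initiator config echoed below hands the same key0 to bdev_nvme_attach_controller. A rough runtime-RPC equivalent of both halves (a hedged sketch; exact rpc.py flag spellings vary across SPDK releases):

  # Assumed key path /tmp/tls.psk; the literal below is an interchange-format
  # PSK that appears later in this log's fips setup, shown only for shape.
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > /tmp/tls.psk
  chmod 0600 /tmp/tls.psk

  # Target side: register the key, require it for host1, and open a
  # TLS-capable listener (mirroring the sock_impl param in the JSON above;
  # flag names may differ by release).
  scripts/rpc.py keyring_file_add_key key0 /tmp/tls.psk
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 --sock-impl ssl

  # Initiator side, against the bdevperf RPC socket started below: attach
  # the remote namespace with the same key.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0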
00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.068 09:42:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:24:49.068 "subsystems": [ 00:24:49.068 { 00:24:49.068 "subsystem": "keyring", 00:24:49.068 "config": [ 00:24:49.068 { 00:24:49.068 "method": "keyring_file_add_key", 00:24:49.068 "params": { 00:24:49.068 "name": "key0", 00:24:49.068 "path": "/tmp/tmp.D6ZEjKx6Va" 00:24:49.068 } 00:24:49.068 } 00:24:49.068 ] 00:24:49.068 }, 00:24:49.068 { 00:24:49.068 "subsystem": "iobuf", 00:24:49.068 "config": [ 00:24:49.068 { 00:24:49.068 "method": "iobuf_set_options", 00:24:49.068 "params": { 00:24:49.068 "small_pool_count": 8192, 00:24:49.068 "large_pool_count": 1024, 00:24:49.068 "small_bufsize": 8192, 00:24:49.068 "large_bufsize": 135168, 00:24:49.068 "enable_numa": false 00:24:49.068 } 00:24:49.068 } 00:24:49.068 ] 00:24:49.068 }, 00:24:49.068 { 00:24:49.068 "subsystem": "sock", 00:24:49.068 "config": [ 00:24:49.068 { 00:24:49.068 "method": "sock_set_default_impl", 00:24:49.068 "params": { 00:24:49.069 "impl_name": "posix" 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "sock_impl_set_options", 00:24:49.069 "params": { 00:24:49.069 "impl_name": "ssl", 00:24:49.069 "recv_buf_size": 4096, 00:24:49.069 "send_buf_size": 4096, 00:24:49.069 "enable_recv_pipe": true, 00:24:49.069 "enable_quickack": false, 00:24:49.069 "enable_placement_id": 0, 00:24:49.069 "enable_zerocopy_send_server": true, 00:24:49.069 "enable_zerocopy_send_client": false, 00:24:49.069 "zerocopy_threshold": 0, 00:24:49.069 "tls_version": 0, 00:24:49.069 "enable_ktls": false 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "sock_impl_set_options", 00:24:49.069 "params": { 00:24:49.069 "impl_name": "posix", 00:24:49.069 "recv_buf_size": 2097152, 00:24:49.069 "send_buf_size": 2097152, 00:24:49.069 "enable_recv_pipe": true, 00:24:49.069 "enable_quickack": false, 00:24:49.069 "enable_placement_id": 0, 00:24:49.069 "enable_zerocopy_send_server": true, 00:24:49.069 "enable_zerocopy_send_client": false, 00:24:49.069 "zerocopy_threshold": 0, 00:24:49.069 "tls_version": 0, 00:24:49.069 "enable_ktls": false 00:24:49.069 } 00:24:49.069 } 00:24:49.069 ] 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "subsystem": "vmd", 00:24:49.069 "config": [] 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "subsystem": "accel", 00:24:49.069 "config": [ 00:24:49.069 { 00:24:49.069 "method": "accel_set_options", 00:24:49.069 "params": { 00:24:49.069 "small_cache_size": 128, 00:24:49.069 "large_cache_size": 16, 00:24:49.069 "task_count": 2048, 00:24:49.069 "sequence_count": 2048, 00:24:49.069 "buf_count": 2048 00:24:49.069 } 00:24:49.069 } 00:24:49.069 ] 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "subsystem": "bdev", 00:24:49.069 "config": [ 00:24:49.069 { 00:24:49.069 "method": "bdev_set_options", 00:24:49.069 "params": { 00:24:49.069 "bdev_io_pool_size": 65535, 00:24:49.069 "bdev_io_cache_size": 256, 00:24:49.069 "bdev_auto_examine": true, 00:24:49.069 "iobuf_small_cache_size": 128, 00:24:49.069 "iobuf_large_cache_size": 16 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": 
"bdev_raid_set_options", 00:24:49.069 "params": { 00:24:49.069 "process_window_size_kb": 1024, 00:24:49.069 "process_max_bandwidth_mb_sec": 0 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_iscsi_set_options", 00:24:49.069 "params": { 00:24:49.069 "timeout_sec": 30 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_nvme_set_options", 00:24:49.069 "params": { 00:24:49.069 "action_on_timeout": "none", 00:24:49.069 "timeout_us": 0, 00:24:49.069 "timeout_admin_us": 0, 00:24:49.069 "keep_alive_timeout_ms": 10000, 00:24:49.069 "arbitration_burst": 0, 00:24:49.069 "low_priority_weight": 0, 00:24:49.069 "medium_priority_weight": 0, 00:24:49.069 "high_priority_weight": 0, 00:24:49.069 "nvme_adminq_poll_period_us": 10000, 00:24:49.069 "nvme_ioq_poll_period_us": 0, 00:24:49.069 "io_queue_requests": 512, 00:24:49.069 "delay_cmd_submit": true, 00:24:49.069 "transport_retry_count": 4, 00:24:49.069 "bdev_retry_count": 3, 00:24:49.069 "transport_ack_timeout": 0, 00:24:49.069 "ctrlr_loss_timeout_sec": 0, 00:24:49.069 "reconnect_delay_sec": 0, 00:24:49.069 "fast_io_fail_timeout_sec": 0, 00:24:49.069 "disable_auto_failback": false, 00:24:49.069 "generate_uuids": false, 00:24:49.069 "transport_tos": 0, 00:24:49.069 "nvme_error_stat": false, 00:24:49.069 "rdma_srq_size": 0, 00:24:49.069 "io_path_stat": false, 00:24:49.069 "allow_accel_sequence": false, 00:24:49.069 "rdma_max_cq_size": 0, 00:24:49.069 "rdma_cm_event_timeout_ms": 0, 00:24:49.069 "dhchap_digests": [ 00:24:49.069 "sha256", 00:24:49.069 "sha384", 00:24:49.069 "sha512" 00:24:49.069 ], 00:24:49.069 "dhchap_dhgroups": [ 00:24:49.069 "null", 00:24:49.069 "ffdhe2048", 00:24:49.069 "ffdhe3072", 00:24:49.069 "ffdhe4096", 00:24:49.069 "ffdhe6144", 00:24:49.069 "ffdhe8192" 00:24:49.069 ] 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_nvme_attach_controller", 00:24:49.069 "params": { 00:24:49.069 "name": "nvme0", 00:24:49.069 "trtype": "TCP", 00:24:49.069 "adrfam": "IPv4", 00:24:49.069 "traddr": "10.0.0.2", 00:24:49.069 "trsvcid": "4420", 00:24:49.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.069 "prchk_reftag": false, 00:24:49.069 "prchk_guard": false, 00:24:49.069 "ctrlr_loss_timeout_sec": 0, 00:24:49.069 "reconnect_delay_sec": 0, 00:24:49.069 "fast_io_fail_timeout_sec": 0, 00:24:49.069 "psk": "key0", 00:24:49.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:49.069 "hdgst": false, 00:24:49.069 "ddgst": false, 00:24:49.069 "multipath": "multipath" 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_nvme_set_hotplug", 00:24:49.069 "params": { 00:24:49.069 "period_us": 100000, 00:24:49.069 "enable": false 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_enable_histogram", 00:24:49.069 "params": { 00:24:49.069 "name": "nvme0n1", 00:24:49.069 "enable": true 00:24:49.069 } 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "method": "bdev_wait_for_examine" 00:24:49.069 } 00:24:49.069 ] 00:24:49.069 }, 00:24:49.069 { 00:24:49.069 "subsystem": "nbd", 00:24:49.069 "config": [] 00:24:49.069 } 00:24:49.069 ] 00:24:49.069 }' 00:24:49.069 [2024-12-09 09:42:24.324034] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:24:49.069 [2024-12-09 09:42:24.324099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843357 ] 00:24:49.069 [2024-12-09 09:42:24.407563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.069 [2024-12-09 09:42:24.423856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.329 [2024-12-09 09:42:24.554351] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.899 09:42:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:50.159 Running I/O for 1 seconds... 00:24:51.096 5334.00 IOPS, 20.84 MiB/s 00:24:51.096 Latency(us) 00:24:51.096 [2024-12-09T08:42:26.549Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.096 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:51.096 Verification LBA range: start 0x0 length 0x2000 00:24:51.096 nvme0n1 : 1.05 5220.74 20.39 0.00 0.00 24049.27 4696.75 44346.03 00:24:51.096 [2024-12-09T08:42:26.549Z] =================================================================================================================== 00:24:51.096 [2024-12-09T08:42:26.549Z] Total : 5220.74 20.39 0.00 0.00 24049.27 4696.75 44346.03 00:24:51.096 { 00:24:51.096 "results": [ 00:24:51.096 { 00:24:51.096 "job": "nvme0n1", 00:24:51.096 "core_mask": "0x2", 00:24:51.096 "workload": "verify", 00:24:51.096 "status": "finished", 00:24:51.096 "verify_range": { 00:24:51.096 "start": 0, 00:24:51.096 "length": 8192 00:24:51.096 }, 00:24:51.096 "queue_depth": 128, 00:24:51.096 "io_size": 4096, 00:24:51.096 "runtime": 1.046212, 00:24:51.096 "iops": 5220.739200085643, 00:24:51.096 "mibps": 20.39351250033454, 00:24:51.096 "io_failed": 0, 00:24:51.096 "io_timeout": 0, 00:24:51.096 "avg_latency_us": 24049.271797876238, 00:24:51.096 "min_latency_us": 4696.746666666667, 00:24:51.096 "max_latency_us": 44346.026666666665 00:24:51.096 } 00:24:51.096 ], 00:24:51.096 "core_count": 1 00:24:51.096 } 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id 
= --pid ']' 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:51.096 nvmf_trace.0 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2843357 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2843357 ']' 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2843357 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.096 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2843357 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2843357' 00:24:51.357 killing process with pid 2843357 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2843357 00:24:51.357 Received shutdown signal, test time was about 1.000000 seconds 00:24:51.357 00:24:51.357 Latency(us) 00:24:51.357 [2024-12-09T08:42:26.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.357 [2024-12-09T08:42:26.810Z] =================================================================================================================== 00:24:51.357 [2024-12-09T08:42:26.810Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2843357 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.357 rmmod nvme_tcp 00:24:51.357 rmmod nvme_fabrics 00:24:51.357 rmmod nvme_keyring 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.357 09:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2843206 ']' 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2843206 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2843206 ']' 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2843206 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.357 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2843206 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2843206' 00:24:51.617 killing process with pid 2843206 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2843206 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2843206 00:24:51.617 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.618 09:42:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.163 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:54.163 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.709AaRasRl /tmp/tmp.Qse5bJRum1 /tmp/tmp.D6ZEjKx6Va 00:24:54.163 00:24:54.163 real 1m21.204s 00:24:54.164 user 2m4.851s 00:24:54.164 sys 0m26.792s 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:54.164 ************************************ 00:24:54.164 END TEST nvmf_tls 
00:24:54.164 ************************************ 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:54.164 ************************************ 00:24:54.164 START TEST nvmf_fips 00:24:54.164 ************************************ 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:54.164 * Looking for test storage... 00:24:54.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.164 --rc genhtml_branch_coverage=1 00:24:54.164 --rc genhtml_function_coverage=1 00:24:54.164 --rc genhtml_legend=1 00:24:54.164 --rc geninfo_all_blocks=1 00:24:54.164 --rc geninfo_unexecuted_blocks=1 00:24:54.164 00:24:54.164 ' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.164 --rc genhtml_branch_coverage=1 00:24:54.164 --rc genhtml_function_coverage=1 00:24:54.164 --rc genhtml_legend=1 00:24:54.164 --rc geninfo_all_blocks=1 00:24:54.164 --rc geninfo_unexecuted_blocks=1 00:24:54.164 00:24:54.164 ' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.164 --rc genhtml_branch_coverage=1 00:24:54.164 --rc genhtml_function_coverage=1 00:24:54.164 --rc genhtml_legend=1 00:24:54.164 --rc geninfo_all_blocks=1 00:24:54.164 --rc geninfo_unexecuted_blocks=1 00:24:54.164 00:24:54.164 ' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.164 --rc genhtml_branch_coverage=1 00:24:54.164 --rc genhtml_function_coverage=1 00:24:54.164 --rc genhtml_legend=1 00:24:54.164 --rc geninfo_all_blocks=1 00:24:54.164 --rc geninfo_unexecuted_blocks=1 00:24:54.164 00:24:54.164 ' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.164 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:24:54.165 09:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:54.165 Error setting digest 00:24:54.165 401209370A7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:54.165 401209370A7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:54.165 
09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.165 09:42:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:02.308 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.309 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:02.309 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:02.309 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:02.309 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:02.309 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:02.309 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:02.309 09:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:02.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:25:02.309 00:25:02.309 --- 10.0.0.2 ping statistics --- 00:25:02.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.309 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:25:02.309 00:25:02.309 --- 10.0.0.1 ping statistics --- 00:25:02.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.309 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:02.309 09:42:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:02.309 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:02.309 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2848068 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2848068 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2848068 ']' 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.310 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.310 [2024-12-09 09:42:37.110420] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
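The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) is the whole per-test network fixture: the target-side port is moved into a private namespace so initiator and target traffic crosses a real link between the two physical ports, and both directions are ping-verified before the target starts. A condensed sketch of that fixture, assuming the interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 addressing this run reports:

  # Target port lives in its own namespace; the initiator port stays in the default one.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # Address both ends of the link.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # Bring everything up, including loopback inside the namespace.
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # Open the NVMe/TCP listen port; the SPDK_NVMF comment tag is what later
  # lets teardown strip only the rules this test added.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  # Sanity-check connectivity in both directions.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1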
00:25:02.310 [2024-12-09 09:42:37.110498] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.310 [2024-12-09 09:42:37.211200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.310 [2024-12-09 09:42:37.236670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.310 [2024-12-09 09:42:37.236719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.310 [2024-12-09 09:42:37.236729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.310 [2024-12-09 09:42:37.236738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.310 [2024-12-09 09:42:37.236744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.310 [2024-12-09 09:42:37.237491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ttb 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ttb 00:25:02.572 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ttb 00:25:02.573 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ttb 00:25:02.573 09:42:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:02.834 [2024-12-09 09:42:38.125344] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.834 [2024-12-09 09:42:38.141327] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:02.834 [2024-12-09 09:42:38.141611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.834 malloc0 00:25:02.834 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.834 09:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2848419 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2848419 /var/tmp/bdevperf.sock 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2848419 ']' 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.835 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:02.835 [2024-12-09 09:42:38.273767] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:25:02.835 [2024-12-09 09:42:38.273847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2848419 ] 00:25:03.096 [2024-12-09 09:42:38.339594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.096 [2024-12-09 09:42:38.360100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.096 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.096 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:03.096 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ttb 00:25:03.356 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:03.356 [2024-12-09 09:42:38.777590] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.618 TLSTESTn1 00:25:03.619 09:42:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.619 Running I/O for 10 seconds... 
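The TLS exercise above reduces to four steps: persist the NVMe TLS interchange key with owner-only permissions, register it in bdevperf's keyring, attach a controller over TCP with that PSK, and drive a timed verify workload. A condensed sketch using the key file, RPC socket, and NQNs from this run (paths relative to the SPDK repo root; the key is this test's throwaway PSK):

  # Write the test PSK to a 0600 file (fips.sh@138-140).
  key_path=/tmp/spdk-psk.ttb
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
  chmod 0600 "$key_path"

  # Register the key with the running bdevperf instance, then attach with TLS.
  sock=/var/tmp/bdevperf.sock
  scripts/rpc.py -s "$sock" keyring_file_add_key key0 "$key_path"
  scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

  # Kick off the 10-second, 128-deep verify workload against TLSTESTn1.
  examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests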
00:25:05.945 4656.00 IOPS, 18.19 MiB/s
[2024-12-09T08:42:42.335Z] 4953.50 IOPS, 19.35 MiB/s
[2024-12-09T08:42:43.273Z] 5150.00 IOPS, 20.12 MiB/s
[2024-12-09T08:42:44.212Z] 5266.75 IOPS, 20.57 MiB/s
[2024-12-09T08:42:45.155Z] 5180.40 IOPS, 20.24 MiB/s
[2024-12-09T08:42:46.104Z] 5191.17 IOPS, 20.28 MiB/s
[2024-12-09T08:42:47.045Z] 5256.71 IOPS, 20.53 MiB/s
[2024-12-09T08:42:47.984Z] 5311.00 IOPS, 20.75 MiB/s
[2024-12-09T08:42:49.362Z] 5189.33 IOPS, 20.27 MiB/s
[2024-12-09T08:42:49.362Z] 5222.90 IOPS, 20.40 MiB/s
00:25:13.909 Latency(us)
[2024-12-09T08:42:49.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.909 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:25:13.909 Verification LBA range: start 0x0 length 0x2000
00:25:13.909 TLSTESTn1 : 10.01 5228.61 20.42 0.00 0.00 24449.20 4833.28 225443.84
00:25:13.909 [2024-12-09T08:42:49.362Z] ===================================================================================================================
00:25:13.909 [2024-12-09T08:42:49.362Z] Total : 5228.61 20.42 0.00 0.00 24449.20 4833.28 225443.84
00:25:13.909 {
00:25:13.909 "results": [
00:25:13.909 {
00:25:13.909 "job": "TLSTESTn1",
00:25:13.909 "core_mask": "0x4",
00:25:13.909 "workload": "verify",
00:25:13.909 "status": "finished",
00:25:13.909 "verify_range": {
00:25:13.909 "start": 0,
00:25:13.909 "length": 8192
00:25:13.909 },
00:25:13.909 "queue_depth": 128,
00:25:13.909 "io_size": 4096,
00:25:13.909 "runtime": 10.013557,
00:25:13.909 "iops": 5228.6115712927985,
00:25:13.909 "mibps": 20.424263950362494,
00:25:13.909 "io_failed": 0,
00:25:13.909 "io_timeout": 0,
00:25:13.909 "avg_latency_us": 24449.202395349872,
00:25:13.909 "min_latency_us": 4833.28,
00:25:13.909 "max_latency_us": 225443.84
00:25:13.909 }
00:25:13.909 ],
00:25:13.909 "core_count": 1
00:25:13.909 }
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:25:13.909 nvmf_trace.0
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2848419
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2848419 ']'
00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 --
# kill -0 2848419 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848419 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848419' 00:25:13.909 killing process with pid 2848419 00:25:13.909 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2848419 00:25:13.909 Received shutdown signal, test time was about 10.000000 seconds 00:25:13.909 00:25:13.909 Latency(us) 00:25:13.909 [2024-12-09T08:42:49.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.910 [2024-12-09T08:42:49.363Z] =================================================================================================================== 00:25:13.910 [2024-12-09T08:42:49.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2848419 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.910 rmmod nvme_tcp 00:25:13.910 rmmod nvme_fabrics 00:25:13.910 rmmod nvme_keyring 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2848068 ']' 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2848068 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2848068 ']' 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2848068 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.910 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2848068 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2848068' 00:25:14.170 killing process with pid 2848068 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2848068 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2848068 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.170 09:42:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ttb 00:25:16.715 00:25:16.715 real 0m22.476s 00:25:16.715 user 0m22.854s 00:25:16.715 sys 0m10.059s 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:16.715 ************************************ 00:25:16.715 END TEST nvmf_fips 00:25:16.715 ************************************ 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:16.715 ************************************ 00:25:16.715 START TEST nvmf_control_msg_list 00:25:16.715 ************************************ 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:16.715 * Looking for test storage... 
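The nvmf_fips teardown just above is worth noting before the next test's output begins: nvmftestfini unloads the kernel modules, iptr restores iptables minus the SPDK-tagged rules, remove_spdk_ns (run with xtrace suppressed) drops the test namespace, and the PSK file is deleted. A plausible sketch of the same cleanup, assuming _remove_spdk_ns amounts to deleting the namespace created earlier:

  # Keep every iptables rule except the ones tagged SPDK_NVMF (nvmf/common.sh@791).
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Drop the target namespace (assumption: what _remove_spdk_ns does here)
  # and clear the initiator-side address (nvmf/common.sh@303).
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

  # Remove the throwaway TLS key material (fips.sh@18).
  rm -f /tmp/spdk-psk.ttb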
00:25:16.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.715 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:16.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.716 --rc genhtml_branch_coverage=1 00:25:16.716 --rc genhtml_function_coverage=1 00:25:16.716 --rc genhtml_legend=1 00:25:16.716 --rc geninfo_all_blocks=1 00:25:16.716 --rc geninfo_unexecuted_blocks=1 00:25:16.716 00:25:16.716 ' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:16.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.716 --rc genhtml_branch_coverage=1 00:25:16.716 --rc genhtml_function_coverage=1 00:25:16.716 --rc genhtml_legend=1 00:25:16.716 --rc geninfo_all_blocks=1 00:25:16.716 --rc geninfo_unexecuted_blocks=1 00:25:16.716 00:25:16.716 ' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:16.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.716 --rc genhtml_branch_coverage=1 00:25:16.716 --rc genhtml_function_coverage=1 00:25:16.716 --rc genhtml_legend=1 00:25:16.716 --rc geninfo_all_blocks=1 00:25:16.716 --rc geninfo_unexecuted_blocks=1 00:25:16.716 00:25:16.716 ' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:16.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.716 --rc genhtml_branch_coverage=1 00:25:16.716 --rc genhtml_function_coverage=1 00:25:16.716 --rc genhtml_legend=1 00:25:16.716 --rc geninfo_all_blocks=1 00:25:16.716 --rc geninfo_unexecuted_blocks=1 00:25:16.716 00:25:16.716 ' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.716 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.717 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:16.717 09:42:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.031 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:25.032 09:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:25.032 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.032 09:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:25.032 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:25.032 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:25.032 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:25.032 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:25.033 09:42:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:25.033 09:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:25.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:25.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:25:25.033 00:25:25.033 --- 10.0.0.2 ping statistics --- 00:25:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.033 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:25.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:25.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:25:25.033 00:25:25.033 --- 10.0.0.1 ping statistics --- 00:25:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:25.033 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2854754 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2854754 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2854754 ']' 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.033 09:42:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.033 [2024-12-09 09:42:59.328929] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:25:25.033 [2024-12-09 09:42:59.328996] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.033 [2024-12-09 09:42:59.428643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.033 [2024-12-09 09:42:59.454921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:25.033 [2024-12-09 09:42:59.454967] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:25.033 [2024-12-09 09:42:59.454975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:25.033 [2024-12-09 09:42:59.454983] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:25.033 [2024-12-09 09:42:59.454989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
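As in the fips test, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten polls (local max_retries=100, per the trace) until the application answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern; the rpc_get_methods probe is an assumption standing in for waitforlisten's internals:

  # Launch the target in the namespace with all trace groups enabled (nvmf/common.sh@508).
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the target is ready, up to 100 attempts.
  for ((i = 0; i < 100; i++)); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done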
00:25:25.033 [2024-12-09 09:42:59.455763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.033 [2024-12-09 09:43:00.189819] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.033 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.034 Malloc0 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.034 09:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:25.034 [2024-12-09 09:43:00.244273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2854808 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2854809 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2854810 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2854808 00:25:25.034 09:43:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:25.034 [2024-12-09 09:43:00.324748] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:25.034 [2024-12-09 09:43:00.334786] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:25.034 [2024-12-09 09:43:00.354534] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:26.446 Initializing NVMe Controllers 00:25:26.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:26.446 Initialization complete. Launching workers. 
00:25:26.446 ======================================================== 00:25:26.446 Latency(us) 00:25:26.446 Device Information : IOPS MiB/s Average min max 00:25:26.446 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 785.00 3.07 1284.32 235.01 41439.85 00:25:26.446 ======================================================== 00:25:26.446 Total : 785.00 3.07 1284.32 235.01 41439.85 00:25:26.446 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2854809 00:25:26.446 Initializing NVMe Controllers 00:25:26.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:26.446 Initialization complete. Launching workers. 00:25:26.446 ======================================================== 00:25:26.446 Latency(us) 00:25:26.446 Device Information : IOPS MiB/s Average min max 00:25:26.446 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1511.98 5.91 661.28 304.52 910.08 00:25:26.446 ======================================================== 00:25:26.446 Total : 1511.98 5.91 661.28 304.52 910.08 00:25:26.446 00:25:26.446 Initializing NVMe Controllers 00:25:26.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:26.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:26.446 Initialization complete. Launching workers. 00:25:26.446 ======================================================== 00:25:26.446 Latency(us) 00:25:26.446 Device Information : IOPS MiB/s Average min max 00:25:26.446 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 817.00 3.19 1248.45 263.40 41495.49 00:25:26.446 ======================================================== 00:25:26.446 Total : 817.00 3.19 1248.45 263.40 41495.49 00:25:26.446 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2854810 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:26.446 rmmod nvme_tcp 00:25:26.446 rmmod nvme_fabrics 00:25:26.446 rmmod nvme_keyring 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 2854754 ']' 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2854754 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2854754 ']' 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2854754 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:26.446 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2854754 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2854754' 00:25:26.447 killing process with pid 2854754 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2854754 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2854754 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:26.447 09:43:01 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:28.996 00:25:28.996 real 0m12.248s 00:25:28.996 user 0m7.962s 00:25:28.996 sys 0m6.479s 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:28.996 ************************************ 00:25:28.996 END TEST nvmf_control_msg_list 00:25:28.996 ************************************ 
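For reference, the target-side sequence traced in TEST nvmf_control_msg_list above boils down to a handful of RPCs plus three concurrent perf jobs. A minimal standalone sketch follows — it assumes a plain rpc.py against an already-running nvmf_tgt (the trace's rpc_cmd is the harness wrapper for the same calls), and SPDK=/path/to/spdk plus the 10.0.0.2:4420 listener address are placeholders for this CI rig's Jenkins workspace path and test network:

#!/usr/bin/env bash
# Minimal sketch of the control_msg_list flow traced above; not the harness itself.
SPDK=/path/to/spdk                     # assumption: your checkout, not the Jenkins workspace
RPC="$SPDK/scripts/rpc.py"
PERF="$SPDK/build/bin/spdk_nvme_perf"
NQN=nqn.2024-07.io.spdk:cnode0

# TCP transport, flags copied from the trace: in-capsule data up to 768 bytes,
# control-message list capped at a single entry.
"$RPC" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# Subsystem backed by a 32 MB / 512-byte-block malloc bdev, listening on TCP 4420.
"$RPC" nvmf_create_subsystem "$NQN" -a
"$RPC" bdev_malloc_create -b Malloc0 32 512
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Three single-queue-depth 4K random-read jobs on separate cores, launched in
# parallel as in the trace (perf_pid1..3), then reaped.
for core in 0x2 0x4 0x8; do
  "$PERF" -c "$core" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait

With --control-msg-num 1 the transport has only a single control message buffer to hand out, so the three concurrent connections presumably contend for the free list — the path this test exercises, and a plausible reason the max latencies in the per-core tables above diverge so widely (roughly 0.9 ms on one core versus ~41 ms on the other two).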
00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:28.996 ************************************ 00:25:28.996 START TEST nvmf_wait_for_buf 00:25:28.996 ************************************ 00:25:28.996 09:43:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:28.996 * Looking for test storage... 00:25:28.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:28.996 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.997 --rc genhtml_branch_coverage=1 00:25:28.997 --rc genhtml_function_coverage=1 00:25:28.997 --rc genhtml_legend=1 00:25:28.997 --rc geninfo_all_blocks=1 00:25:28.997 --rc geninfo_unexecuted_blocks=1 00:25:28.997 00:25:28.997 ' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.997 --rc genhtml_branch_coverage=1 00:25:28.997 --rc genhtml_function_coverage=1 00:25:28.997 --rc genhtml_legend=1 00:25:28.997 --rc geninfo_all_blocks=1 00:25:28.997 --rc geninfo_unexecuted_blocks=1 00:25:28.997 00:25:28.997 ' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.997 --rc genhtml_branch_coverage=1 00:25:28.997 --rc genhtml_function_coverage=1 00:25:28.997 --rc genhtml_legend=1 00:25:28.997 --rc geninfo_all_blocks=1 00:25:28.997 --rc geninfo_unexecuted_blocks=1 00:25:28.997 00:25:28.997 ' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:28.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:28.997 --rc genhtml_branch_coverage=1 00:25:28.997 --rc genhtml_function_coverage=1 00:25:28.997 --rc genhtml_legend=1 00:25:28.997 --rc geninfo_all_blocks=1 00:25:28.997 --rc geninfo_unexecuted_blocks=1 00:25:28.997 00:25:28.997 ' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:28.997 09:43:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:28.997 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:28.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:28.998 09:43:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.140 
09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:37.140 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:37.140 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:37.140 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.140 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:37.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.141 09:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.688 ms 00:25:37.141 00:25:37.141 --- 10.0.0.2 ping statistics --- 00:25:37.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.141 rtt min/avg/max/mdev = 0.688/0.688/0.688/0.000 ms 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:25:37.141 00:25:37.141 --- 10.0.0.1 ping statistics --- 00:25:37.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.141 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2859384 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2859384 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2859384 ']' 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.141 09:43:11 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 [2024-12-09 09:43:11.638651] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:25:37.141 [2024-12-09 09:43:11.638715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.141 [2024-12-09 09:43:11.738801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.141 [2024-12-09 09:43:11.764902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.141 [2024-12-09 09:43:11.764954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:37.141 [2024-12-09 09:43:11.764963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:37.141 [2024-12-09 09:43:11.764970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:37.141 [2024-12-09 09:43:11.764978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:37.141 [2024-12-09 09:43:11.765730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:37.141 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.141 09:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 Malloc0 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 [2024-12-09 09:43:12.625589] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:37.403 [2024-12-09 09:43:12.661936] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.403 09:43:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.403 [2024-12-09 09:43:12.763757] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:38.790 Initializing NVMe Controllers 00:25:38.790 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:38.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:38.790 Initialization complete. Launching workers. 00:25:38.790 ======================================================== 00:25:38.790 Latency(us) 00:25:38.790 Device Information : IOPS MiB/s Average min max 00:25:38.790 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32294.71 8007.16 63855.38 00:25:38.790 ======================================================== 00:25:38.790 Total : 129.00 16.12 32294.71 8007.16 63855.38 00:25:38.790 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.051 rmmod nvme_tcp 00:25:39.051 rmmod nvme_fabrics 00:25:39.051 rmmod nvme_keyring 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2859384 ']' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2859384 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2859384 ']' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2859384 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2859384 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2859384' 00:25:39.051 killing process with pid 2859384 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2859384 00:25:39.051 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2859384 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.312 09:43:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:41.854 00:25:41.854 real 0m12.699s 00:25:41.854 user 0m5.216s 00:25:41.854 sys 0m6.066s 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:41.854 ************************************ 00:25:41.854 END TEST nvmf_wait_for_buf 00:25:41.854 ************************************ 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:41.854 09:43:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:41.854 09:43:16 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:41.855 ************************************ 00:25:41.855 START TEST nvmf_fuzz 00:25:41.855 ************************************ 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:41.855 * Looking for test storage... 00:25:41.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:41.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.855 --rc genhtml_branch_coverage=1 00:25:41.855 --rc genhtml_function_coverage=1 00:25:41.855 --rc genhtml_legend=1 00:25:41.855 --rc geninfo_all_blocks=1 00:25:41.855 --rc geninfo_unexecuted_blocks=1 00:25:41.855 00:25:41.855 ' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:41.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.855 --rc genhtml_branch_coverage=1 00:25:41.855 --rc genhtml_function_coverage=1 00:25:41.855 --rc genhtml_legend=1 00:25:41.855 --rc geninfo_all_blocks=1 00:25:41.855 --rc geninfo_unexecuted_blocks=1 00:25:41.855 00:25:41.855 ' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:41.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.855 --rc genhtml_branch_coverage=1 00:25:41.855 --rc genhtml_function_coverage=1 00:25:41.855 --rc genhtml_legend=1 00:25:41.855 --rc geninfo_all_blocks=1 00:25:41.855 --rc geninfo_unexecuted_blocks=1 00:25:41.855 00:25:41.855 ' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:41.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:41.855 --rc genhtml_branch_coverage=1 00:25:41.855 --rc genhtml_function_coverage=1 00:25:41.855 --rc genhtml_legend=1 00:25:41.855 --rc geninfo_all_blocks=1 00:25:41.855 --rc geninfo_unexecuted_blocks=1 00:25:41.855 00:25:41.855 ' 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:41.855 09:43:16 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:41.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:41.855 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:41.856 09:43:17 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.044 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:50.044 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:50.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:50.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:50.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:50.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:25:50.045 00:25:50.045 --- 10.0.0.2 ping statistics --- 00:25:50.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.045 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:50.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms
00:25:50.045 
00:25:50.045 --- 10.0.0.1 ping statistics ---
00:25:50.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:50.045 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2864132
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2864132
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2864132 ']'
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:50.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
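At this point the harness has finished its namespace plumbing: the target-side port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace at 10.0.0.2, the initiator keeps cvl_0_1 at 10.0.0.1 in the root namespace, the firewall admits TCP/4420, and nvmf_tgt is coming up inside the namespace. A condensed sketch of the same bring-up, assembled from the commands visible in this trace; the socket-polling loop at the end is an illustrative stand-in for the harness's waitforlisten (not its real implementation), and direct scripts/rpc.py calls stand in for the rpc_cmd wrapper that issues the same RPCs below:

#!/usr/bin/env bash
set -u
NS=cvl_0_0_ns_spdk                       # target-side network namespace
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"          # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# crude readiness wait: poll for the RPC UNIX socket (a filesystem object,
# so it is visible from the root namespace as well)
for _ in $(seq 1 100); do
    if [[ -S /var/tmp/spdk.sock ]]; then break; fi
    sleep 0.1
done
# subsystem wiring, matching the rpc_cmd sequence in the trace that follows:
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420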
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:50.045 09:43:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.045 Malloc0
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.045 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:25:50.046 09:43:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:26:22.150 Fuzzing completed. Shutting down the fuzz application
00:26:22.150 
00:26:22.150 Dumping successful admin opcodes:
00:26:22.150 9, 10,
00:26:22.150 Dumping successful io opcodes:
00:26:22.150 0, 9,
00:26:22.150 NS: 0x2000008eff00 I/O qp, Total commands completed: 1098666, total successful commands: 6455, random_seed: 2475329728
00:26:22.150 NS: 0x2000008eff00 admin qp, Total commands completed: 137792, total successful commands: 30, random_seed: 385584384
00:26:22.150 09:43:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:26:22.150 Fuzzing completed. Shutting down the fuzz application
00:26:22.150 
00:26:22.150 Dumping successful admin opcodes:
00:26:22.150 
00:26:22.150 Dumping successful io opcodes:
00:26:22.150 
00:26:22.150 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 671713243
00:26:22.150 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 671785017
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:22.150 09:43:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:22.150 rmmod nvme_tcp
00:26:22.150 rmmod nvme_fabrics
00:26:22.150 rmmod nvme_keyring
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 2864132 ']'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2864132 ']'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2864132'
00:26:22.151 killing process with pid 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 2864132
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:22.151 09:43:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:26:24.061 
00:26:24.061 real 0m42.559s
00:26:24.061 user 0m55.063s
00:26:24.061 sys 0m17.064s
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 ************************************
00:26:24.061 END TEST nvmf_fuzz
00:26:24.061 ************************************
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:24.061 ************************************
00:26:24.061 START
TEST nvmf_multiconnection 00:26:24.061 ************************************ 00:26:24.061 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:24.322 * Looking for test storage... 00:26:24.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:24.322 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:24.322 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:26:24.322 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:24.322 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:24.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.323 --rc genhtml_branch_coverage=1 00:26:24.323 --rc genhtml_function_coverage=1 00:26:24.323 --rc genhtml_legend=1 00:26:24.323 --rc geninfo_all_blocks=1 00:26:24.323 --rc geninfo_unexecuted_blocks=1 00:26:24.323 00:26:24.323 ' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:24.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.323 --rc genhtml_branch_coverage=1 00:26:24.323 --rc genhtml_function_coverage=1 00:26:24.323 --rc genhtml_legend=1 00:26:24.323 --rc geninfo_all_blocks=1 00:26:24.323 --rc geninfo_unexecuted_blocks=1 00:26:24.323 00:26:24.323 ' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:24.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.323 --rc genhtml_branch_coverage=1 00:26:24.323 --rc genhtml_function_coverage=1 00:26:24.323 --rc genhtml_legend=1 00:26:24.323 --rc geninfo_all_blocks=1 00:26:24.323 --rc geninfo_unexecuted_blocks=1 00:26:24.323 00:26:24.323 ' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:24.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.323 --rc genhtml_branch_coverage=1 00:26:24.323 --rc genhtml_function_coverage=1 00:26:24.323 --rc genhtml_legend=1 00:26:24.323 --rc geninfo_all_blocks=1 00:26:24.323 --rc geninfo_unexecuted_blocks=1 00:26:24.323 00:26:24.323 ' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:24.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.323 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:24.324 09:43:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.472 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.472 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.472 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.472 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.472 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.473 09:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:32.473 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:32.473 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:32.473 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:32.473 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.473 09:44:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:26:32.473 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.473 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.473 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.473 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:26:32.473 00:26:32.473 --- 10.0.0.2 ping statistics --- 00:26:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.474 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:32.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:26:32.474 00:26:32.474 --- 10.0.0.1 ping statistics --- 00:26:32.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.474 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=2874474 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 2874474 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.474 09:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 2874474 ']' 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.474 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.474 [2024-12-09 09:44:07.172563] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:26:32.474 [2024-12-09 09:44:07.172652] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.474 [2024-12-09 09:44:07.272136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.474 [2024-12-09 09:44:07.301744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:32.474 [2024-12-09 09:44:07.301793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.474 [2024-12-09 09:44:07.301805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.474 [2024-12-09 09:44:07.301813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.474 [2024-12-09 09:44:07.301819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
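The block above is the usual nvmf_tcp_init bring-up from nvmf/common.sh: one port of the E810 pair (cvl_0_0) is moved into a private network namespace, both ends are addressed, TCP/4420 is opened in iptables, reachability is checked in both directions with ping, and nvmf_tgt is then launched inside that namespace so target and initiator traffic cross the physical link. A minimal stand-alone sketch of the same sequence follows (shell; the readiness poll at the end is an assumption for illustration — the suite's waitforlisten helper works differently):

    # Split the two ports: the target side lives in its own netns
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Launch the target inside the namespace, then poll its RPC socket until it answers
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done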
00:26:32.474 [2024-12-09 09:44:07.303997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.474 [2024-12-09 09:44:07.304121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.474 [2024-12-09 09:44:07.304288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.474 [2024-12-09 09:44:07.304289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.736 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.736 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:32.736 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.736 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.736 09:44:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 [2024-12-09 09:44:08.031181] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 Malloc1 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
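With the app up (four reactors on cores 0-3), the script arms the shutdown trap and creates the TCP transport before provisioning the subsystems below. rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py; the plain equivalent of the transport call is sketched here, with the flag readings inferred from rpc.py's option names rather than stated anywhere in this log:

    # Create the NVMe-oF TCP transport; rpc.py targets /var/tmp/spdk.sock by default
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    #   -t tcp  transport type
    #   -u 8192 I/O unit size in bytes (assumed: -u maps to --io-unit-size)
    #   -o      toggles the TCP C2H-success optimization (assumed: -o maps to --c2h-success)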
00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 [2024-12-09 09:44:08.111156] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 Malloc2 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:32.736 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 Malloc3 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 Malloc4 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 Malloc5 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 Malloc6 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 Malloc7 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 
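At this point the trace has repeated the same four RPCs for cnode1 through cnode7 and continues through cnode11: create a 64 MB malloc bdev with 512-byte blocks, create the subsystem with serial SPDKn, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420. Condensed, the loop multiconnection.sh is running amounts to this sketch (plain rpc.py standing in for the rpc_cmd wrapper):

    # One malloc-backed subsystem per iteration, cnode1..cnode11
    for i in $(seq 1 11); do
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i            # 64 MB, 512 B blocks
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done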
00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:32.997 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.998 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 Malloc8 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 Malloc9 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:33.258 09:44:08 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 Malloc10 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 Malloc11 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:33.258 09:44:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:35.170 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:35.170 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:35.170 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.171 09:44:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:35.171 09:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.080 09:44:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:38.469 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:38.469 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:38.469 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:38.469 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:38.469 09:44:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:40.392 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:40.392 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:40.392 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:40.393 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:40.393 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:40.393 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:40.393 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:40.393 09:44:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:42.318 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:42.318 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:42.318 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:26:42.318 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:42.318 09:44:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:44.234 09:44:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:46.149 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:46.149 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:46.149 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.149 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:46.149 09:44:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:48.065 09:44:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:49.454 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:49.454 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # 
local i=0 00:26:49.454 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:49.454 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:49.454 09:44:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:51.996 09:44:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:53.377 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:53.377 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:53.377 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:53.377 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:53.377 09:44:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:55.393 09:44:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:57.303 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:57.303 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:57.303 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:57.303 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:57.303 09:44:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:59.217 09:44:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:27:00.601 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:27:00.601 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:00.601 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:00.602 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:00.602 09:44:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:03.149 09:44:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:27:04.533 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:27:04.533 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:04.533 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:04.533 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:04.533 09:44:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:07.076 09:44:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:27:08.460 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:27:08.460 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:08.460 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:08.460 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:08.460 09:44:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:10.380 09:44:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:10.380 09:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:27:12.942 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:27:12.942 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:27:12.942 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:12.942 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:12.942 09:44:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:27:14.876 09:44:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:27:14.876 [global] 00:27:14.876 thread=1 00:27:14.876 invalidate=1 00:27:14.876 rw=read 00:27:14.876 time_based=1 00:27:14.876 runtime=10 00:27:14.876 ioengine=libaio 00:27:14.876 direct=1 00:27:14.876 bs=262144 00:27:14.876 iodepth=64 00:27:14.876 norandommap=1 00:27:14.876 numjobs=1 00:27:14.876 00:27:14.876 [job0] 00:27:14.876 filename=/dev/nvme0n1 00:27:14.876 [job1] 00:27:14.876 filename=/dev/nvme10n1 00:27:14.876 [job2] 00:27:14.876 filename=/dev/nvme1n1 00:27:14.876 [job3] 00:27:14.876 filename=/dev/nvme2n1 00:27:14.876 [job4] 00:27:14.876 filename=/dev/nvme3n1 00:27:14.876 [job5] 00:27:14.876 filename=/dev/nvme4n1 00:27:14.876 [job6] 00:27:14.876 filename=/dev/nvme5n1 00:27:14.876 [job7] 00:27:14.876 filename=/dev/nvme6n1 00:27:14.876 [job8] 00:27:14.876 filename=/dev/nvme7n1 00:27:14.876 [job9] 00:27:14.876 filename=/dev/nvme8n1 00:27:14.876 [job10] 00:27:14.876 filename=/dev/nvme9n1 00:27:14.876 Could not set queue depth (nvme0n1) 00:27:14.876 Could not set queue depth (nvme10n1) 00:27:14.876 Could not set queue depth (nvme1n1) 00:27:14.876 Could not set queue depth (nvme2n1) 00:27:14.876 Could not set queue depth (nvme3n1) 00:27:14.876 Could not set queue depth (nvme4n1) 00:27:14.876 Could not set queue depth (nvme5n1) 00:27:14.876 Could not set queue depth (nvme6n1) 00:27:14.876 Could not set queue depth (nvme7n1) 00:27:14.876 Could not set queue depth (nvme8n1) 00:27:14.876 Could not set queue depth (nvme9n1) 00:27:15.143 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.143 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.144 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.144 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:15.144 fio-3.35 00:27:15.144 Starting 11 threads 00:27:27.378 00:27:27.378 job0: (groupid=0, jobs=1): err= 0: pid=2883033: Mon Dec 9 09:45:01 2024 00:27:27.378 read: IOPS=340, BW=85.1MiB/s (89.2MB/s)(862MiB/10126msec) 00:27:27.378 slat (usec): min=7, max=196210, avg=2897.92, stdev=11046.69 00:27:27.378 clat (msec): min=17, max=769, avg=184.82, stdev=146.89 00:27:27.378 lat (msec): min=18, max=769, avg=187.72, stdev=149.01 00:27:27.378 clat percentiles (msec): 00:27:27.378 | 1.00th=[ 31], 5.00th=[ 34], 10.00th=[ 39], 20.00th=[ 58], 00:27:27.378 | 30.00th=[ 86], 40.00th=[ 103], 50.00th=[ 122], 60.00th=[ 178], 00:27:27.378 | 70.00th=[ 257], 80.00th=[ 309], 90.00th=[ 388], 95.00th=[ 468], 00:27:27.378 | 99.00th=[ 634], 99.50th=[ 693], 99.90th=[ 726], 99.95th=[ 726], 00:27:27.378 | 99.99th=[ 768] 00:27:27.378 bw ( KiB/s): min=20992, max=271360, per=10.12%, avg=86630.40, stdev=74635.00, samples=20 00:27:27.378 iops : min= 82, max= 1060, avg=338.40, stdev=291.54, samples=20 00:27:27.378 lat (msec) : 20=0.09%, 50=14.97%, 100=23.93%, 250=30.43%, 500=26.52% 00:27:27.378 lat (msec) : 750=4.03%, 1000=0.03% 00:27:27.378 cpu : usr=0.07%, sys=1.26%, ctx=576, majf=0, minf=4097 00:27:27.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:27:27.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.378 issued rwts: total=3447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.378 job1: (groupid=0, jobs=1): err= 0: pid=2883050: Mon Dec 9 09:45:01 2024 00:27:27.378 read: IOPS=127, BW=31.8MiB/s (33.4MB/s)(323MiB/10130msec) 00:27:27.378 slat (usec): min=10, max=297995, avg=6499.08, stdev=23907.75 00:27:27.378 clat (msec): min=14, max=961, avg=495.27, stdev=188.81 00:27:27.378 lat (msec): min=15, max=961, avg=501.77, stdev=190.57 00:27:27.378 clat percentiles (msec): 00:27:27.378 | 1.00th=[ 59], 5.00th=[ 194], 10.00th=[ 230], 20.00th=[ 305], 00:27:27.378 | 30.00th=[ 401], 40.00th=[ 460], 50.00th=[ 510], 60.00th=[ 575], 00:27:27.378 | 70.00th=[ 609], 80.00th=[ 651], 90.00th=[ 735], 95.00th=[ 802], 00:27:27.378 | 99.00th=[ 860], 99.50th=[ 902], 99.90th=[ 936], 99.95th=[ 961], 00:27:27.378 | 99.99th=[ 961] 00:27:27.378 bw ( KiB/s): min=11776, max=68096, 
per=3.67%, avg=31411.20, stdev=12642.49, samples=20 00:27:27.378 iops : min= 46, max= 266, avg=122.70, stdev=49.38, samples=20 00:27:27.378 lat (msec) : 20=0.23%, 100=1.09%, 250=11.71%, 500=34.34%, 750=44.19% 00:27:27.378 lat (msec) : 1000=8.45% 00:27:27.378 cpu : usr=0.04%, sys=0.55%, ctx=209, majf=0, minf=4097 00:27:27.378 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:27.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.378 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.378 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.378 job2: (groupid=0, jobs=1): err= 0: pid=2883072: Mon Dec 9 09:45:01 2024 00:27:27.378 read: IOPS=360, BW=90.2MiB/s (94.6MB/s)(913MiB/10120msec) 00:27:27.378 slat (usec): min=12, max=452040, avg=1890.62, stdev=14136.68 00:27:27.378 clat (msec): min=6, max=871, avg=175.19, stdev=189.11 00:27:27.378 lat (msec): min=7, max=928, avg=177.08, stdev=191.29 00:27:27.378 clat percentiles (msec): 00:27:27.378 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 57], 00:27:27.378 | 30.00th=[ 77], 40.00th=[ 90], 50.00th=[ 107], 60.00th=[ 125], 00:27:27.378 | 70.00th=[ 148], 80.00th=[ 180], 90.00th=[ 472], 95.00th=[ 693], 00:27:27.378 | 99.00th=[ 785], 99.50th=[ 810], 99.90th=[ 860], 99.95th=[ 869], 00:27:27.378 | 99.99th=[ 869] 00:27:27.378 bw ( KiB/s): min=10752, max=239616, per=10.74%, avg=91878.40, stdev=67109.63, samples=20 00:27:27.378 iops : min= 42, max= 936, avg=358.90, stdev=262.15, samples=20 00:27:27.378 lat (msec) : 10=0.16%, 20=1.53%, 50=11.86%, 100=31.82%, 250=36.61% 00:27:27.378 lat (msec) : 500=8.41%, 750=8.05%, 1000=1.56% 00:27:27.378 cpu : usr=0.09%, sys=1.31%, ctx=873, majf=0, minf=4097 00:27:27.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=3652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job3: (groupid=0, jobs=1): err= 0: pid=2883086: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=127, BW=31.8MiB/s (33.3MB/s)(322MiB/10130msec) 00:27:27.379 slat (usec): min=13, max=213753, avg=6229.58, stdev=21552.95 00:27:27.379 clat (msec): min=13, max=874, avg=496.70, stdev=184.98 00:27:27.379 lat (msec): min=14, max=874, avg=502.93, stdev=186.32 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 43], 5.00th=[ 197], 10.00th=[ 236], 20.00th=[ 300], 00:27:27.379 | 30.00th=[ 418], 40.00th=[ 493], 50.00th=[ 542], 60.00th=[ 567], 00:27:27.379 | 70.00th=[ 609], 80.00th=[ 651], 90.00th=[ 726], 95.00th=[ 760], 00:27:27.379 | 99.00th=[ 810], 99.50th=[ 835], 99.90th=[ 844], 99.95th=[ 877], 00:27:27.379 | 99.99th=[ 877] 00:27:27.379 bw ( KiB/s): min=16896, max=68608, per=3.66%, avg=31338.95, stdev=12287.50, samples=20 00:27:27.379 iops : min= 66, max= 268, avg=122.40, stdev=47.98, samples=20 00:27:27.379 lat (msec) : 20=0.23%, 50=1.40%, 100=0.47%, 250=11.27%, 500=27.27% 00:27:27.379 lat (msec) : 750=53.30%, 1000=6.06% 00:27:27.379 cpu : usr=0.08%, sys=0.52%, ctx=274, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete 
: 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=1287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job4: (groupid=0, jobs=1): err= 0: pid=2883093: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=136, BW=34.0MiB/s (35.7MB/s)(345MiB/10135msec) 00:27:27.379 slat (usec): min=13, max=357198, avg=6688.24, stdev=25583.90 00:27:27.379 clat (msec): min=11, max=1018, avg=463.05, stdev=221.59 00:27:27.379 lat (msec): min=13, max=1018, avg=469.74, stdev=224.57 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 20], 5.00th=[ 159], 10.00th=[ 215], 20.00th=[ 257], 00:27:27.379 | 30.00th=[ 296], 40.00th=[ 334], 50.00th=[ 456], 60.00th=[ 550], 00:27:27.379 | 70.00th=[ 609], 80.00th=[ 667], 90.00th=[ 768], 95.00th=[ 852], 00:27:27.379 | 99.00th=[ 919], 99.50th=[ 1020], 99.90th=[ 1020], 99.95th=[ 1020], 00:27:27.379 | 99.99th=[ 1020] 00:27:27.379 bw ( KiB/s): min= 9728, max=71168, per=3.93%, avg=33664.00, stdev=16300.31, samples=20 00:27:27.379 iops : min= 38, max= 278, avg=131.50, stdev=63.67, samples=20 00:27:27.379 lat (msec) : 20=1.38%, 50=0.58%, 100=0.44%, 250=14.94%, 500=38.87% 00:27:27.379 lat (msec) : 750=32.85%, 1000=10.37%, 2000=0.58% 00:27:27.379 cpu : usr=0.02%, sys=0.59%, ctx=236, majf=0, minf=3534 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=1379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job5: (groupid=0, jobs=1): err= 0: pid=2883105: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=146, BW=36.5MiB/s (38.3MB/s)(370MiB/10118msec) 00:27:27.379 slat (usec): min=13, max=203758, avg=4794.00, stdev=18671.16 00:27:27.379 clat (msec): min=11, max=828, avg=432.89, stdev=187.12 00:27:27.379 lat (msec): min=11, max=828, avg=437.68, stdev=189.24 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 62], 5.00th=[ 108], 10.00th=[ 150], 20.00th=[ 245], 00:27:27.379 | 30.00th=[ 347], 40.00th=[ 409], 50.00th=[ 451], 60.00th=[ 502], 00:27:27.379 | 70.00th=[ 550], 80.00th=[ 609], 90.00th=[ 667], 95.00th=[ 709], 00:27:27.379 | 99.00th=[ 802], 99.50th=[ 810], 99.90th=[ 827], 99.95th=[ 827], 00:27:27.379 | 99.99th=[ 827] 00:27:27.379 bw ( KiB/s): min=15872, max=84480, per=4.23%, avg=36224.00, stdev=15395.78, samples=20 00:27:27.379 iops : min= 62, max= 330, avg=141.50, stdev=60.14, samples=20 00:27:27.379 lat (msec) : 20=0.88%, 100=2.23%, 250=17.66%, 500=38.84%, 750=37.89% 00:27:27.379 lat (msec) : 1000=2.50% 00:27:27.379 cpu : usr=0.04%, sys=0.56%, ctx=259, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=1478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job6: (groupid=0, jobs=1): err= 0: pid=2883116: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=182, BW=45.6MiB/s (47.8MB/s)(462MiB/10129msec) 00:27:27.379 slat (usec): min=12, max=247274, avg=3420.87, stdev=17349.90 00:27:27.379 clat (msec): min=4, max=1035, avg=347.12, 
stdev=276.59 00:27:27.379 lat (msec): min=4, max=1035, avg=350.54, stdev=279.08 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 71], 00:27:27.379 | 30.00th=[ 112], 40.00th=[ 176], 50.00th=[ 296], 60.00th=[ 439], 00:27:27.379 | 70.00th=[ 550], 80.00th=[ 600], 90.00th=[ 693], 95.00th=[ 827], 00:27:27.379 | 99.00th=[ 1020], 99.50th=[ 1020], 99.90th=[ 1036], 99.95th=[ 1036], 00:27:27.379 | 99.99th=[ 1036] 00:27:27.379 bw ( KiB/s): min=11264, max=207360, per=5.33%, avg=45619.20, stdev=46409.64, samples=20 00:27:27.379 iops : min= 44, max= 810, avg=178.20, stdev=181.29, samples=20 00:27:27.379 lat (msec) : 10=1.90%, 20=5.90%, 50=10.83%, 100=6.83%, 250=19.34% 00:27:27.379 lat (msec) : 500=18.36%, 750=29.36%, 1000=5.47%, 2000=2.00% 00:27:27.379 cpu : usr=0.10%, sys=0.63%, ctx=372, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job7: (groupid=0, jobs=1): err= 0: pid=2883126: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=839, BW=210MiB/s (220MB/s)(2112MiB/10064msec) 00:27:27.379 slat (usec): min=6, max=631296, avg=1053.09, stdev=8719.45 00:27:27.379 clat (usec): min=1120, max=1124.9k, avg=75101.59, stdev=123978.50 00:27:27.379 lat (usec): min=1170, max=1197.4k, avg=76154.68, stdev=125263.72 00:27:27.379 clat percentiles (usec): 00:27:27.379 | 1.00th=[ 1958], 5.00th=[ 7242], 10.00th=[ 27919], 00:27:27.379 | 20.00th=[ 34866], 30.00th=[ 38011], 40.00th=[ 41157], 00:27:27.379 | 50.00th=[ 44827], 60.00th=[ 55837], 70.00th=[ 60556], 00:27:27.379 | 80.00th=[ 66323], 90.00th=[ 122160], 95.00th=[ 164627], 00:27:27.379 | 99.00th=[ 717226], 99.50th=[1061159], 99.90th=[1115685], 00:27:27.379 | 99.95th=[1115685], 99.99th=[1132463] 00:27:27.379 bw ( KiB/s): min= 2048, max=456192, per=25.08%, avg=214604.80, stdev=141448.65, samples=20 00:27:27.379 iops : min= 8, max= 1782, avg=838.30, stdev=552.53, samples=20 00:27:27.379 lat (msec) : 2=1.07%, 4=2.15%, 10=1.87%, 20=0.43%, 50=50.94% 00:27:27.379 lat (msec) : 100=30.13%, 250=10.43%, 500=0.18%, 750=1.95%, 1000=0.31% 00:27:27.379 lat (msec) : 2000=0.54% 00:27:27.379 cpu : usr=0.16%, sys=2.62%, ctx=1680, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=8446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job8: (groupid=0, jobs=1): err= 0: pid=2883154: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=453, BW=113MiB/s (119MB/s)(1148MiB/10127msec) 00:27:27.379 slat (usec): min=9, max=174956, avg=1780.13, stdev=8220.26 00:27:27.379 clat (msec): min=3, max=745, avg=139.09, stdev=131.48 00:27:27.379 lat (msec): min=3, max=749, avg=140.87, stdev=132.93 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 11], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 54], 00:27:27.379 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 90], 60.00th=[ 110], 00:27:27.379 | 70.00th=[ 131], 80.00th=[ 209], 90.00th=[ 321], 95.00th=[ 443], 
00:27:27.379 | 99.00th=[ 642], 99.50th=[ 684], 99.90th=[ 735], 99.95th=[ 743], 00:27:27.379 | 99.99th=[ 743] 00:27:27.379 bw ( KiB/s): min=23040, max=288768, per=13.55%, avg=115968.00, stdev=87245.75, samples=20 00:27:27.379 iops : min= 90, max= 1128, avg=453.00, stdev=340.80, samples=20 00:27:27.379 lat (msec) : 4=0.04%, 10=0.94%, 20=0.61%, 50=9.80%, 100=43.89% 00:27:27.379 lat (msec) : 250=28.17%, 500=13.11%, 750=3.44% 00:27:27.379 cpu : usr=0.11%, sys=1.48%, ctx=919, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:27.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.379 issued rwts: total=4593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.379 job9: (groupid=0, jobs=1): err= 0: pid=2883167: Mon Dec 9 09:45:01 2024 00:27:27.379 read: IOPS=459, BW=115MiB/s (121MB/s)(1157MiB/10060msec) 00:27:27.379 slat (usec): min=9, max=305934, avg=1719.19, stdev=8687.22 00:27:27.379 clat (usec): min=1887, max=819360, avg=137345.93, stdev=155960.53 00:27:27.379 lat (msec): min=2, max=834, avg=139.07, stdev=157.70 00:27:27.379 clat percentiles (msec): 00:27:27.379 | 1.00th=[ 4], 5.00th=[ 6], 10.00th=[ 19], 20.00th=[ 72], 00:27:27.379 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 84], 00:27:27.379 | 70.00th=[ 106], 80.00th=[ 144], 90.00th=[ 342], 95.00th=[ 567], 00:27:27.379 | 99.00th=[ 718], 99.50th=[ 768], 99.90th=[ 818], 99.95th=[ 818], 00:27:27.379 | 99.99th=[ 818] 00:27:27.379 bw ( KiB/s): min=20480, max=276992, per=13.65%, avg=116817.45, stdev=79498.77, samples=20 00:27:27.379 iops : min= 80, max= 1082, avg=456.30, stdev=310.56, samples=20 00:27:27.379 lat (msec) : 2=0.02%, 4=2.68%, 10=4.56%, 20=3.80%, 50=5.73% 00:27:27.379 lat (msec) : 100=51.30%, 250=16.13%, 500=9.53%, 750=5.47%, 1000=0.78% 00:27:27.379 cpu : usr=0.19%, sys=1.53%, ctx=1386, majf=0, minf=4097 00:27:27.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:27:27.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.380 issued rwts: total=4626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.380 job10: (groupid=0, jobs=1): err= 0: pid=2883178: Mon Dec 9 09:45:01 2024 00:27:27.380 read: IOPS=181, BW=45.4MiB/s (47.6MB/s)(460MiB/10120msec) 00:27:27.380 slat (usec): min=11, max=589636, avg=4151.84, stdev=21046.36 00:27:27.380 clat (msec): min=46, max=1125, avg=347.86, stdev=228.33 00:27:27.380 lat (msec): min=46, max=1191, avg=352.02, stdev=230.55 00:27:27.380 clat percentiles (msec): 00:27:27.380 | 1.00th=[ 63], 5.00th=[ 74], 10.00th=[ 121], 20.00th=[ 165], 00:27:27.380 | 30.00th=[ 199], 40.00th=[ 243], 50.00th=[ 284], 60.00th=[ 334], 00:27:27.380 | 70.00th=[ 393], 80.00th=[ 531], 90.00th=[ 684], 95.00th=[ 827], 00:27:27.380 | 99.00th=[ 1011], 99.50th=[ 1028], 99.90th=[ 1133], 99.95th=[ 1133], 00:27:27.380 | 99.99th=[ 1133] 00:27:27.380 bw ( KiB/s): min= 8192, max=111104, per=5.31%, avg=45414.40, stdev=23994.87, samples=20 00:27:27.380 iops : min= 32, max= 434, avg=177.40, stdev=93.73, samples=20 00:27:27.380 lat (msec) : 50=0.27%, 100=6.86%, 250=34.98%, 500=35.85%, 750=15.02% 00:27:27.380 lat (msec) : 1000=5.66%, 2000=1.36% 00:27:27.380 cpu : usr=0.09%, sys=0.61%, 
ctx=340, majf=0, minf=4097 00:27:27.380 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:27:27.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.380 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:27.380 issued rwts: total=1838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:27.380 00:27:27.380 Run status group 0 (all jobs): 00:27:27.380 READ: bw=836MiB/s (876MB/s), 31.8MiB/s-210MiB/s (33.3MB/s-220MB/s), io=8471MiB (8882MB), run=10060-10135msec 00:27:27.380 00:27:27.380 Disk stats (read/write): 00:27:27.380 nvme0n1: ios=6834/0, merge=0/0, ticks=1243996/0, in_queue=1243996, util=96.42% 00:27:27.380 nvme10n1: ios=2495/0, merge=0/0, ticks=1241041/0, in_queue=1241041, util=96.66% 00:27:27.380 nvme1n1: ios=7266/0, merge=0/0, ticks=1250412/0, in_queue=1250412, util=97.07% 00:27:27.380 nvme2n1: ios=2491/0, merge=0/0, ticks=1243396/0, in_queue=1243396, util=97.28% 00:27:27.380 nvme3n1: ios=2664/0, merge=0/0, ticks=1237533/0, in_queue=1237533, util=97.42% 00:27:27.380 nvme4n1: ios=2885/0, merge=0/0, ticks=1250588/0, in_queue=1250588, util=97.75% 00:27:27.380 nvme5n1: ios=3620/0, merge=0/0, ticks=1242454/0, in_queue=1242454, util=98.07% 00:27:27.380 nvme6n1: ios=16483/0, merge=0/0, ticks=1225834/0, in_queue=1225834, util=98.12% 00:27:27.380 nvme7n1: ios=9118/0, merge=0/0, ticks=1249935/0, in_queue=1249935, util=98.79% 00:27:27.380 nvme8n1: ios=8871/0, merge=0/0, ticks=1224614/0, in_queue=1224614, util=98.88% 00:27:27.380 nvme9n1: ios=3617/0, merge=0/0, ticks=1257077/0, in_queue=1257077, util=99.14% 00:27:27.380 09:45:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:27.380 [global] 00:27:27.380 thread=1 00:27:27.380 invalidate=1 00:27:27.380 rw=randwrite 00:27:27.380 time_based=1 00:27:27.380 runtime=10 00:27:27.380 ioengine=libaio 00:27:27.380 direct=1 00:27:27.380 bs=262144 00:27:27.380 iodepth=64 00:27:27.380 norandommap=1 00:27:27.380 numjobs=1 00:27:27.380 00:27:27.380 [job0] 00:27:27.380 filename=/dev/nvme0n1 00:27:27.380 [job1] 00:27:27.380 filename=/dev/nvme10n1 00:27:27.380 [job2] 00:27:27.380 filename=/dev/nvme1n1 00:27:27.380 [job3] 00:27:27.380 filename=/dev/nvme2n1 00:27:27.380 [job4] 00:27:27.380 filename=/dev/nvme3n1 00:27:27.380 [job5] 00:27:27.380 filename=/dev/nvme4n1 00:27:27.380 [job6] 00:27:27.380 filename=/dev/nvme5n1 00:27:27.380 [job7] 00:27:27.380 filename=/dev/nvme6n1 00:27:27.380 [job8] 00:27:27.380 filename=/dev/nvme7n1 00:27:27.380 [job9] 00:27:27.380 filename=/dev/nvme8n1 00:27:27.380 [job10] 00:27:27.380 filename=/dev/nvme9n1 00:27:27.380 Could not set queue depth (nvme0n1) 00:27:27.380 Could not set queue depth (nvme10n1) 00:27:27.380 Could not set queue depth (nvme1n1) 00:27:27.380 Could not set queue depth (nvme2n1) 00:27:27.380 Could not set queue depth (nvme3n1) 00:27:27.380 Could not set queue depth (nvme4n1) 00:27:27.380 Could not set queue depth (nvme5n1) 00:27:27.380 Could not set queue depth (nvme6n1) 00:27:27.380 Could not set queue depth (nvme7n1) 00:27:27.380 Could not set queue depth (nvme8n1) 00:27:27.380 Could not set queue depth (nvme9n1) 00:27:27.380 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:27.380 fio-3.35 00:27:27.380 Starting 11 threads 00:27:37.380 00:27:37.380 job0: (groupid=0, jobs=1): err= 0: pid=2885327: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=261, BW=65.5MiB/s (68.7MB/s)(664MiB/10134msec); 0 zone resets 00:27:37.380 slat (usec): min=26, max=94213, avg=3582.45, stdev=7330.16 00:27:37.380 clat (msec): min=4, max=427, avg=240.69, stdev=99.32 00:27:37.380 lat (msec): min=5, max=432, avg=244.27, stdev=100.43 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 13], 5.00th=[ 73], 10.00th=[ 100], 20.00th=[ 148], 00:27:37.380 | 30.00th=[ 176], 40.00th=[ 239], 50.00th=[ 262], 60.00th=[ 288], 00:27:37.380 | 70.00th=[ 305], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 376], 00:27:37.380 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 422], 99.95th=[ 426], 00:27:37.380 | 99.99th=[ 426] 00:27:37.380 bw ( KiB/s): min=43008, max=119808, per=5.07%, avg=66337.55, stdev=22831.82, samples=20 00:27:37.380 iops : min= 168, max= 468, avg=259.10, stdev=89.17, samples=20 00:27:37.380 lat (msec) : 10=0.41%, 20=2.34%, 50=1.96%, 100=5.46%, 250=34.78% 00:27:37.380 lat (msec) : 500=55.05% 00:27:37.380 cpu : usr=0.60%, sys=0.78%, ctx=804, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,2654,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job1: (groupid=0, jobs=1): err= 0: pid=2885357: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=246, BW=61.7MiB/s (64.7MB/s)(625MiB/10135msec); 0 zone resets 00:27:37.380 slat (usec): min=24, max=442012, avg=3782.75, stdev=13718.63 00:27:37.380 clat (msec): min=7, max=742, avg=255.52, stdev=128.62 00:27:37.380 lat (msec): min=8, max=743, avg=259.30, stdev=130.14 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 22], 5.00th=[ 81], 10.00th=[ 97], 20.00th=[ 140], 00:27:37.380 | 30.00th=[ 157], 40.00th=[ 224], 50.00th=[ 264], 60.00th=[ 296], 00:27:37.380 | 70.00th=[ 330], 80.00th=[ 359], 90.00th=[ 384], 95.00th=[ 447], 00:27:37.380 | 99.00th=[ 651], 99.50th=[ 701], 99.90th=[ 726], 99.95th=[ 726], 00:27:37.380 | 99.99th=[ 
743] 00:27:37.380 bw ( KiB/s): min=15360, max=119808, per=4.77%, avg=62387.20, stdev=28429.54, samples=20 00:27:37.380 iops : min= 60, max= 468, avg=243.70, stdev=111.05, samples=20 00:27:37.380 lat (msec) : 10=0.08%, 20=0.68%, 50=2.24%, 100=8.44%, 250=34.00% 00:27:37.380 lat (msec) : 500=50.80%, 750=3.76% 00:27:37.380 cpu : usr=0.60%, sys=0.95%, ctx=763, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,2500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job2: (groupid=0, jobs=1): err= 0: pid=2885381: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=845, BW=211MiB/s (222MB/s)(2127MiB/10064msec); 0 zone resets 00:27:37.380 slat (usec): min=18, max=40092, avg=1162.50, stdev=2411.75 00:27:37.380 clat (msec): min=9, max=318, avg=74.51, stdev=40.90 00:27:37.380 lat (msec): min=9, max=318, avg=75.67, stdev=41.49 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 41], 5.00th=[ 43], 10.00th=[ 45], 20.00th=[ 48], 00:27:37.380 | 30.00th=[ 51], 40.00th=[ 54], 50.00th=[ 61], 60.00th=[ 64], 00:27:37.380 | 70.00th=[ 73], 80.00th=[ 96], 90.00th=[ 144], 95.00th=[ 157], 00:27:37.380 | 99.00th=[ 236], 99.50th=[ 296], 99.90th=[ 317], 99.95th=[ 317], 00:27:37.380 | 99.99th=[ 317] 00:27:37.380 bw ( KiB/s): min=65536, max=354816, per=16.53%, avg=216217.60, stdev=89947.12, samples=20 00:27:37.380 iops : min= 256, max= 1386, avg=844.60, stdev=351.36, samples=20 00:27:37.380 lat (msec) : 10=0.05%, 20=0.09%, 50=28.88%, 100=53.11%, 250=16.96% 00:27:37.380 lat (msec) : 500=0.92% 00:27:37.380 cpu : usr=1.95%, sys=2.66%, ctx=2076, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,8509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job3: (groupid=0, jobs=1): err= 0: pid=2885393: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=700, BW=175MiB/s (184MB/s)(1774MiB/10135msec); 0 zone resets 00:27:37.380 slat (usec): min=27, max=213487, avg=1288.43, stdev=4750.31 00:27:37.380 clat (usec): min=1977, max=420281, avg=90080.07, stdev=56970.50 00:27:37.380 lat (msec): min=2, max=424, avg=91.37, stdev=57.53 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 8], 5.00th=[ 27], 10.00th=[ 44], 20.00th=[ 52], 00:27:37.380 | 30.00th=[ 57], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 85], 00:27:37.380 | 70.00th=[ 107], 80.00th=[ 140], 90.00th=[ 171], 95.00th=[ 190], 00:27:37.380 | 99.00th=[ 296], 99.50th=[ 326], 99.90th=[ 388], 99.95th=[ 405], 00:27:37.380 | 99.99th=[ 422] 00:27:37.380 bw ( KiB/s): min=82944, max=310272, per=13.76%, avg=180035.45, stdev=75535.94, samples=20 00:27:37.380 iops : min= 324, max= 1212, avg=703.25, stdev=295.07, samples=20 00:27:37.380 lat (msec) : 2=0.01%, 4=0.34%, 10=1.24%, 20=2.11%, 50=14.38% 00:27:37.380 lat (msec) : 100=48.32%, 250=31.91%, 500=1.69% 00:27:37.380 cpu : usr=1.74%, sys=2.31%, ctx=2151, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,7095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job4: (groupid=0, jobs=1): err= 0: pid=2885400: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=485, BW=121MiB/s (127MB/s)(1222MiB/10065msec); 0 zone resets 00:27:37.380 slat (usec): min=26, max=122867, avg=1796.81, stdev=4634.34 00:27:37.380 clat (msec): min=3, max=456, avg=129.98, stdev=77.82 00:27:37.380 lat (msec): min=3, max=456, avg=131.77, stdev=78.78 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 15], 5.00th=[ 46], 10.00th=[ 57], 20.00th=[ 75], 00:27:37.380 | 30.00th=[ 92], 40.00th=[ 100], 50.00th=[ 108], 60.00th=[ 124], 00:27:37.380 | 70.00th=[ 140], 80.00th=[ 169], 90.00th=[ 247], 95.00th=[ 313], 00:27:37.380 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 439], 99.95th=[ 456], 00:27:37.380 | 99.99th=[ 456] 00:27:37.380 bw ( KiB/s): min=48128, max=201728, per=9.44%, avg=123483.60, stdev=49893.94, samples=20 00:27:37.380 iops : min= 188, max= 788, avg=482.35, stdev=194.89, samples=20 00:27:37.380 lat (msec) : 4=0.02%, 10=0.61%, 20=1.02%, 50=4.93%, 100=34.06% 00:27:37.380 lat (msec) : 250=49.51%, 500=9.84% 00:27:37.380 cpu : usr=1.07%, sys=1.60%, ctx=1790, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,4886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job5: (groupid=0, jobs=1): err= 0: pid=2885423: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=319, BW=79.9MiB/s (83.8MB/s)(805MiB/10075msec); 0 zone resets 00:27:37.380 slat (usec): min=27, max=121845, avg=2815.14, stdev=7948.03 00:27:37.380 clat (usec): min=1942, max=471175, avg=196967.67, stdev=124877.46 00:27:37.380 lat (msec): min=2, max=471, avg=199.78, stdev=126.78 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 6], 5.00th=[ 23], 10.00th=[ 44], 20.00th=[ 84], 00:27:37.380 | 30.00th=[ 92], 40.00th=[ 144], 50.00th=[ 159], 60.00th=[ 222], 00:27:37.380 | 70.00th=[ 284], 80.00th=[ 330], 90.00th=[ 384], 95.00th=[ 401], 00:27:37.380 | 99.00th=[ 451], 99.50th=[ 464], 99.90th=[ 472], 99.95th=[ 472], 00:27:37.380 | 99.99th=[ 472] 00:27:37.380 bw ( KiB/s): min=38989, max=233472, per=6.18%, avg=80823.05, stdev=51110.18, samples=20 00:27:37.380 iops : min= 152, max= 912, avg=315.70, stdev=199.66, samples=20 00:27:37.380 lat (msec) : 2=0.03%, 4=0.34%, 10=1.80%, 20=2.27%, 50=6.49% 00:27:37.380 lat (msec) : 100=20.37%, 250=31.55%, 500=37.14% 00:27:37.380 cpu : usr=0.76%, sys=1.06%, ctx=1315, majf=0, minf=1 00:27:37.380 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:37.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.380 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.380 issued rwts: total=0,3220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.380 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.380 job6: (groupid=0, jobs=1): err= 0: pid=2885440: Mon Dec 9 09:45:12 2024 00:27:37.380 write: IOPS=294, BW=73.6MiB/s (77.2MB/s)(741MiB/10064msec); 0 zone resets 00:27:37.380 slat (usec): min=17, 
max=99554, avg=2959.73, stdev=7002.95 00:27:37.380 clat (msec): min=6, max=449, avg=214.43, stdev=108.98 00:27:37.380 lat (msec): min=6, max=449, avg=217.39, stdev=110.56 00:27:37.380 clat percentiles (msec): 00:27:37.380 | 1.00th=[ 25], 5.00th=[ 57], 10.00th=[ 80], 20.00th=[ 97], 00:27:37.380 | 30.00th=[ 120], 40.00th=[ 157], 50.00th=[ 232], 60.00th=[ 262], 00:27:37.380 | 70.00th=[ 292], 80.00th=[ 321], 90.00th=[ 355], 95.00th=[ 376], 00:27:37.381 | 99.00th=[ 426], 99.50th=[ 447], 99.90th=[ 451], 99.95th=[ 451], 00:27:37.381 | 99.99th=[ 451] 00:27:37.381 bw ( KiB/s): min=43008, max=190976, per=5.67%, avg=74214.40, stdev=41842.28, samples=20 00:27:37.381 iops : min= 168, max= 746, avg=289.90, stdev=163.45, samples=20 00:27:37.381 lat (msec) : 10=0.07%, 20=0.44%, 50=4.05%, 100=19.55%, 250=32.14% 00:27:37.381 lat (msec) : 500=43.75% 00:27:37.381 cpu : usr=0.52%, sys=1.08%, ctx=1169, majf=0, minf=1 00:27:37.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:37.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.381 issued rwts: total=0,2962,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.381 job7: (groupid=0, jobs=1): err= 0: pid=2885451: Mon Dec 9 09:45:12 2024 00:27:37.381 write: IOPS=982, BW=246MiB/s (257MB/s)(2471MiB/10062msec); 0 zone resets 00:27:37.381 slat (usec): min=11, max=16400, avg=1002.65, stdev=1885.20 00:27:37.381 clat (msec): min=11, max=164, avg=64.14, stdev=24.13 00:27:37.381 lat (msec): min=11, max=164, avg=65.14, stdev=24.47 00:27:37.381 clat percentiles (msec): 00:27:37.381 | 1.00th=[ 33], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 45], 00:27:37.381 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 64], 00:27:37.381 | 70.00th=[ 74], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 107], 00:27:37.381 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 161], 99.95th=[ 165], 00:27:37.381 | 99.99th=[ 165] 00:27:37.381 bw ( KiB/s): min=143360, max=400896, per=19.22%, avg=251392.00, stdev=83547.12, samples=20 00:27:37.381 iops : min= 560, max= 1566, avg=982.00, stdev=326.36, samples=20 00:27:37.381 lat (msec) : 20=0.12%, 50=40.10%, 100=53.70%, 250=6.08% 00:27:37.381 cpu : usr=2.06%, sys=2.90%, ctx=2440, majf=0, minf=1 00:27:37.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:27:37.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.381 issued rwts: total=0,9883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.381 job8: (groupid=0, jobs=1): err= 0: pid=2885463: Mon Dec 9 09:45:12 2024 00:27:37.381 write: IOPS=264, BW=66.1MiB/s (69.3MB/s)(670MiB/10138msec); 0 zone resets 00:27:37.381 slat (usec): min=23, max=100701, avg=3448.08, stdev=7653.96 00:27:37.381 clat (msec): min=16, max=451, avg=238.56, stdev=98.57 00:27:37.381 lat (msec): min=16, max=451, avg=242.01, stdev=99.79 00:27:37.381 clat percentiles (msec): 00:27:37.381 | 1.00th=[ 84], 5.00th=[ 112], 10.00th=[ 120], 20.00th=[ 146], 00:27:37.381 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 241], 60.00th=[ 275], 00:27:37.381 | 70.00th=[ 309], 80.00th=[ 338], 90.00th=[ 368], 95.00th=[ 401], 00:27:37.381 | 99.00th=[ 447], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:27:37.381 | 99.99th=[ 451] 00:27:37.381 bw ( 
KiB/s): min=38912, max=123392, per=5.12%, avg=66969.60, stdev=28337.83, samples=20 00:27:37.381 iops : min= 152, max= 482, avg=261.60, stdev=110.69, samples=20 00:27:37.381 lat (msec) : 20=0.15%, 50=0.45%, 100=0.78%, 250=50.11%, 500=48.51% 00:27:37.381 cpu : usr=0.77%, sys=0.61%, ctx=831, majf=0, minf=1 00:27:37.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:27:37.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.381 issued rwts: total=0,2680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.381 job9: (groupid=0, jobs=1): err= 0: pid=2885467: Mon Dec 9 09:45:12 2024 00:27:37.381 write: IOPS=287, BW=71.8MiB/s (75.3MB/s)(724MiB/10074msec); 0 zone resets 00:27:37.381 slat (usec): min=26, max=51534, avg=2968.74, stdev=6590.07 00:27:37.381 clat (msec): min=4, max=410, avg=219.58, stdev=86.53 00:27:37.381 lat (msec): min=4, max=410, avg=222.54, stdev=87.79 00:27:37.381 clat percentiles (msec): 00:27:37.381 | 1.00th=[ 43], 5.00th=[ 99], 10.00th=[ 117], 20.00th=[ 150], 00:27:37.381 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 194], 60.00th=[ 239], 00:27:37.381 | 70.00th=[ 284], 80.00th=[ 317], 90.00th=[ 342], 95.00th=[ 359], 00:27:37.381 | 99.00th=[ 388], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:27:37.381 | 99.99th=[ 409] 00:27:37.381 bw ( KiB/s): min=45056, max=107008, per=5.54%, avg=72483.05, stdev=22047.99, samples=20 00:27:37.381 iops : min= 176, max= 418, avg=283.10, stdev=86.09, samples=20 00:27:37.381 lat (msec) : 10=0.07%, 20=0.17%, 50=1.38%, 100=3.77%, 250=57.88% 00:27:37.381 lat (msec) : 500=36.73% 00:27:37.381 cpu : usr=0.80%, sys=0.75%, ctx=1136, majf=0, minf=1 00:27:37.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:37.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.381 issued rwts: total=0,2894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.381 job10: (groupid=0, jobs=1): err= 0: pid=2885473: Mon Dec 9 09:45:12 2024 00:27:37.381 write: IOPS=446, BW=112MiB/s (117MB/s)(1132MiB/10135msec); 0 zone resets 00:27:37.381 slat (usec): min=19, max=698857, avg=2089.22, stdev=11549.88 00:27:37.381 clat (msec): min=2, max=1094, avg=141.15, stdev=129.38 00:27:37.381 lat (msec): min=2, max=1110, avg=143.24, stdev=130.62 00:27:37.381 clat percentiles (msec): 00:27:37.381 | 1.00th=[ 9], 5.00th=[ 30], 10.00th=[ 57], 20.00th=[ 63], 00:27:37.381 | 30.00th=[ 87], 40.00th=[ 92], 50.00th=[ 96], 60.00th=[ 118], 00:27:37.381 | 70.00th=[ 138], 80.00th=[ 184], 90.00th=[ 326], 95.00th=[ 363], 00:27:37.381 | 99.00th=[ 852], 99.50th=[ 877], 99.90th=[ 1083], 99.95th=[ 1099], 00:27:37.381 | 99.99th=[ 1099] 00:27:37.381 bw ( KiB/s): min=45056, max=268288, per=8.73%, avg=114243.00, stdev=59888.79, samples=20 00:27:37.381 iops : min= 176, max= 1048, avg=446.25, stdev=233.93, samples=20 00:27:37.381 lat (msec) : 4=0.27%, 10=1.26%, 20=2.14%, 50=4.84%, 100=45.58% 00:27:37.381 lat (msec) : 250=31.73%, 500=12.79%, 750=0.20%, 1000=0.73%, 2000=0.46% 00:27:37.381 cpu : usr=0.99%, sys=1.35%, ctx=1497, majf=0, minf=1 00:27:37.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:37.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:37.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:37.381 issued rwts: total=0,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.381 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:37.381 00:27:37.381 Run status group 0 (all jobs): 00:27:37.381 WRITE: bw=1278MiB/s (1340MB/s), 61.7MiB/s-246MiB/s (64.7MB/s-257MB/s), io=12.6GiB (13.6GB), run=10062-10138msec 00:27:37.381 00:27:37.381 Disk stats (read/write): 00:27:37.381 nvme0n1: ios=49/5247, merge=0/0, ticks=47/1224118, in_queue=1224165, util=96.69% 00:27:37.381 nvme10n1: ios=42/4938, merge=0/0, ticks=3293/1175609, in_queue=1178902, util=100.00% 00:27:37.381 nvme1n1: ios=26/16613, merge=0/0, ticks=906/1193968, in_queue=1194874, util=100.00% 00:27:37.381 nvme2n1: ios=49/14127, merge=0/0, ticks=5384/1156324, in_queue=1161708, util=100.00% 00:27:37.381 nvme3n1: ios=46/9343, merge=0/0, ticks=1517/1198507, in_queue=1200024, util=100.00% 00:27:37.381 nvme4n1: ios=50/6136, merge=0/0, ticks=5494/1184865, in_queue=1190359, util=100.00% 00:27:37.381 nvme5n1: ios=0/5529, merge=0/0, ticks=0/1203917, in_queue=1203917, util=97.90% 00:27:37.381 nvme6n1: ios=0/19398, merge=0/0, ticks=0/1195538, in_queue=1195538, util=98.07% 00:27:37.381 nvme7n1: ios=0/5291, merge=0/0, ticks=0/1225173, in_queue=1225173, util=98.65% 00:27:37.381 nvme8n1: ios=30/5489, merge=0/0, ticks=1877/1196460, in_queue=1198337, util=100.00% 00:27:37.381 nvme9n1: ios=39/8988, merge=0/0, ticks=2547/1151815, in_queue=1154362, util=100.00% 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:37.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.381 09:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:37.381 09:45:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:37.642 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:37.642 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:37.904 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:37.904 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.166 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:38.166 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:38.166 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.427 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:38.688 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.688 09:45:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.688 09:45:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:38.949 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:38.949 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:38.949 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.950 09:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:38.950 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:39.210 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.210 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.211 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.211 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:39.472 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.472 09:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.472 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:39.734 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:39.734 09:45:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:39.734 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.734 
09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:39.734 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:39.995 rmmod nvme_tcp 00:27:39.995 rmmod nvme_fabrics 00:27:39.995 rmmod nvme_keyring 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 2874474 ']' 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 2874474 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 2874474 ']' 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 2874474 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2874474 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2874474' 00:27:39.995 killing process with pid 2874474 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 2874474 00:27:39.995 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 2874474 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:40.256 09:45:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:40.256 09:45:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.803 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:42.804 00:27:42.804 real 1m18.223s 00:27:42.804 user 5m0.331s 00:27:42.804 sys 0m16.651s 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:42.804 ************************************ 00:27:42.804 END TEST nvmf_multiconnection 00:27:42.804 ************************************ 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:42.804 ************************************ 00:27:42.804 START TEST nvmf_initiator_timeout 00:27:42.804 ************************************ 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:42.804 * Looking for test storage... 
00:27:42.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.804 --rc genhtml_branch_coverage=1 00:27:42.804 --rc genhtml_function_coverage=1 00:27:42.804 --rc genhtml_legend=1 00:27:42.804 --rc geninfo_all_blocks=1 00:27:42.804 --rc geninfo_unexecuted_blocks=1 00:27:42.804 00:27:42.804 ' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.804 --rc genhtml_branch_coverage=1 00:27:42.804 --rc genhtml_function_coverage=1 00:27:42.804 --rc genhtml_legend=1 00:27:42.804 --rc geninfo_all_blocks=1 00:27:42.804 --rc geninfo_unexecuted_blocks=1 00:27:42.804 00:27:42.804 ' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.804 --rc genhtml_branch_coverage=1 00:27:42.804 --rc genhtml_function_coverage=1 00:27:42.804 --rc genhtml_legend=1 00:27:42.804 --rc geninfo_all_blocks=1 00:27:42.804 --rc geninfo_unexecuted_blocks=1 00:27:42.804 00:27:42.804 ' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:42.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.804 --rc genhtml_branch_coverage=1 00:27:42.804 --rc genhtml_function_coverage=1 00:27:42.804 --rc genhtml_legend=1 00:27:42.804 --rc geninfo_all_blocks=1 00:27:42.804 --rc geninfo_unexecuted_blocks=1 00:27:42.804 00:27:42.804 ' 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.804 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.805 09:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:42.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:42.805 09:45:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.955 09:45:25 
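Buried in the prologue above is one genuine shell error: common.sh line 33 runs [ '' -eq 1 ] and bash rejects it with "integer expression expected", because -eq requires integer operands and the flag under test expands to an empty string (which variable it is cannot be read from the trace; the run continues only because the test sits in a conditional). A sketch of the usual guard, with a hypothetical flag name for illustration:

  # Failing form from the trace: the left operand is empty.
  #   [ "$flag" -eq 1 ]   ->   [: : integer expression expected
  # Guarded form: default the (assumed) flag before the integer test.
  flag=${flag:-0}
  if [ "$flag" -eq 1 ]; then
      NVMF_APP+=(--some-option)   # placeholder; the real branch is not visible
  fi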
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.955 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:50.956 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.956 09:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:50.956 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:50.956 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.956 09:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:50.956 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:50.956 09:45:25 
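Device discovery is complete at this point: both 0x8086:0x159b functions were classified into the e810 array, their net devices resolved through /sys/bus/pci/devices/<pci>/net, and is_hw=yes selected the physical-NIC path; the namespace wiring that starts here continues just below. A hypothetical condensation of that discovery, assuming lspci is available (the script itself walks a prebuilt pci_bus_cache, as the trace shows):

  # Collect Intel E810 functions (0x1592/0x159b) and their netdev names.
  pci_devs=($(lspci -Dmn | awk '$3 == "\"8086\"" && ($4 == "\"1592\"" || $4 == "\"159b\"") {print $1}'))
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$dev" ] && net_devs+=("${dev##*/}")   # e.g. cvl_0_0, cvl_0_1
      done
  done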
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:50.956 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:50.956 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.704 ms 00:27:50.956 00:27:50.956 --- 10.0.0.2 ping statistics --- 00:27:50.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.956 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:50.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:50.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:27:50.956 00:27:50.956 --- 10.0.0.1 ping statistics --- 00:27:50.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:50.956 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=2892195 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 
2892195 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 2892195 ']' 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:50.956 09:45:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 [2024-12-09 09:45:25.457354] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:27:50.956 [2024-12-09 09:45:25.457417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.956 [2024-12-09 09:45:25.556320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:50.956 [2024-12-09 09:45:25.574532] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.956 [2024-12-09 09:45:25.574566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.956 [2024-12-09 09:45:25.574574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:50.956 [2024-12-09 09:45:25.574580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:50.956 [2024-12-09 09:45:25.574586] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
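The namespace wiring traced just above gives the test a two-endpoint TCP topology on one host: port cvl_0_0 moves into namespace cvl_0_0_ns_spdk as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, one iptables ACCEPT rule (comment-tagged SPDK_NVMF so iptr can strip it on teardown) opens port 4420, and both directions are ping-verified before nvmf_tgt is launched inside the namespace, which is why the DPDK EAL initialization notices appear here. Replayed as bare commands (interface names are this rig's; other hardware enumerates differently):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator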
00:27:50.956 [2024-12-09 09:45:25.576085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:50.956 [2024-12-09 09:45:25.576210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:50.956 [2024-12-09 09:45:25.576365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.956 [2024-12-09 09:45:25.576366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 Malloc0 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 Delay0 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 [2024-12-09 09:45:26.330903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 
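Provisioning, traced here and continued just below, is a short RPC sequence: a 64 MiB malloc bdev wrapped in a delay bdev whose four latency knobs (average and p99, read and write) start at 30 microseconds, a TCP transport with the -o/-u 8192 options as traced, and subsystem cnode1 exposing Delay0 on 10.0.0.2:4420. A sketch as plain rpc.py calls equivalent to the rpc_cmd wrapper in the trace (rpc.py reaches the target over /var/tmp/spdk.sock; the path is shortened here):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420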
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:50.956 [2024-12-09 09:45:26.371210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.956 09:45:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:52.868 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:52.868 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:52.868 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:52.868 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:52.868 09:45:27 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2893078 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:54.817 09:45:29 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:54.817 [global] 00:27:54.817 thread=1 00:27:54.817 invalidate=1 00:27:54.817 rw=write 00:27:54.817 time_based=1 00:27:54.817 runtime=60 00:27:54.817 ioengine=libaio 00:27:54.817 direct=1 00:27:54.817 bs=4096 00:27:54.817 iodepth=1 00:27:54.817 norandommap=0 00:27:54.817 numjobs=1 00:27:54.817 00:27:54.817 verify_dump=1 00:27:54.817 verify_backlog=512 00:27:54.817 verify_state_save=0 00:27:54.817 do_verify=1 00:27:54.817 verify=crc32c-intel 00:27:54.817 [job0] 00:27:54.817 filename=/dev/nvme0n1 00:27:54.817 Could not set queue depth (nvme0n1) 00:27:55.080 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:55.080 fio-3.35 00:27:55.080 Starting 1 thread 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.624 true 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.624 true 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.624 09:45:32 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.624 true 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:57.624 true 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.624 09:45:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.929 09:45:36 
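The bdev_delay_update_latency calls above are the crux of the test: with fio issuing 4 KiB verified writes against the delayed namespace, every latency knob is raised to 31000000 microseconds (31 s, and p99_write to 310000000 as traced), stalling I/O long enough to trip the initiator's command timeout and exercise its abort/retry path; after the sleep 3, the @48-@51 calls traced around here drop all four knobs back to 30 microseconds so fio can drain and verify cleanly. The stall-and-restore window as bare calls (same rpc.py shorthand as earlier; the 30 s timeout figure is the kernel initiator's default, not read from the trace):

  # Stall: push completion latency past the initiator timeout (microseconds).
  rpc.py bdev_delay_update_latency Delay0 avg_read   31000000
  rpc.py bdev_delay_update_latency Delay0 avg_write  31000000
  rpc.py bdev_delay_update_latency Delay0 p99_read   31000000
  rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
  sleep 3
  # Restore: let queued I/O complete fast again.
  for knob in avg_read avg_write p99_read p99_write; do
      rpc.py bdev_delay_update_latency Delay0 "$knob" 30
  done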
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.929 true 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.929 true 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.929 true 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:00.929 true 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:28:00.929 09:45:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2893078 00:28:57.366 00:28:57.366 job0: (groupid=0, jobs=1): err= 0: pid=2893396: Mon Dec 9 09:46:30 2024 00:28:57.366 read: IOPS=80, BW=324KiB/s (332kB/s)(19.0MiB/60025msec) 00:28:57.366 slat (usec): min=7, max=13182, avg=30.76, stdev=220.23 00:28:57.366 clat (usec): min=312, max=41831k, avg=11797.44, stdev=600185.83 00:28:57.366 lat (usec): min=324, max=41831k, avg=11828.20, stdev=600185.55 00:28:57.366 clat percentiles (usec): 00:28:57.366 | 1.00th=[ 537], 5.00th=[ 644], 10.00th=[ 693], 00:28:57.366 | 20.00th=[ 775], 30.00th=[ 816], 40.00th=[ 848], 00:28:57.366 | 50.00th=[ 881], 60.00th=[ 922], 70.00th=[ 1012], 00:28:57.366 | 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 41157], 00:28:57.366 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 43254], 00:28:57.366 | 99.95th=[ 43254], 99.99th=[17112761] 00:28:57.366 write: IOPS=85, BW=341KiB/s (349kB/s)(20.0MiB/60025msec); 0 zone resets 00:28:57.366 slat (usec): min=9, max=29895, avg=35.85, stdev=417.53 00:28:57.366 clat (usec): min=148, max=1270, avg=449.72, stdev=115.85 00:28:57.366 lat (usec): min=183, max=30471, avg=485.57, stdev=435.83 00:28:57.366 clat percentiles (usec): 00:28:57.366 | 1.00th=[ 202], 5.00th=[ 269], 10.00th=[ 306], 20.00th=[ 343], 00:28:57.366 | 30.00th=[ 396], 40.00th=[ 429], 50.00th=[ 449], 60.00th=[ 469], 00:28:57.366 | 70.00th=[ 486], 80.00th=[ 545], 90.00th=[ 619], 95.00th=[ 652], 00:28:57.366 | 99.00th=[ 734], 99.50th=[ 775], 99.90th=[ 881], 
99.95th=[ 898], 00:28:57.366 | 99.99th=[ 1270] 00:28:57.366 bw ( KiB/s): min= 304, max= 4096, per=100.00%, avg=2730.67, stdev=1499.42, samples=15 00:28:57.366 iops : min= 76, max= 1024, avg=682.67, stdev=374.86, samples=15 00:28:57.366 lat (usec) : 250=1.96%, 500=35.96%, 750=20.91%, 1000=25.73% 00:28:57.366 lat (msec) : 2=12.70%, 50=2.74%, >=2000=0.01% 00:28:57.366 cpu : usr=0.31%, sys=0.52%, ctx=9984, majf=0, minf=1 00:28:57.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:57.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:57.367 issued rwts: total=4858,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:57.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:57.367 00:28:57.367 Run status group 0 (all jobs): 00:28:57.367 READ: bw=324KiB/s (332kB/s), 324KiB/s-324KiB/s (332kB/s-332kB/s), io=19.0MiB (19.9MB), run=60025-60025msec 00:28:57.367 WRITE: bw=341KiB/s (349kB/s), 341KiB/s-341KiB/s (349kB/s-349kB/s), io=20.0MiB (21.0MB), run=60025-60025msec 00:28:57.367 00:28:57.367 Disk stats (read/write): 00:28:57.367 nvme0n1: ios=4907/5120, merge=0/0, ticks=15643/2069, in_queue=17712, util=99.68% 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:57.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:57.367 nvmf hotplug test: fio successful as expected 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.367 rmmod nvme_tcp 00:28:57.367 rmmod nvme_fabrics 00:28:57.367 rmmod nvme_keyring 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 2892195 ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 2892195 ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2892195' 00:28:57.367 killing process with pid 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 2892195 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.367 09:46:30 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:57.628 00:28:57.628 real 1m15.273s 00:28:57.628 user 4m34.341s 00:28:57.628 sys 0m7.815s 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:57.628 ************************************ 00:28:57.628 END TEST nvmf_initiator_timeout 00:28:57.628 ************************************ 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:57.628 09:46:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.766 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:05.766 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:05.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:05.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:05.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.767 09:46:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:05.767 ************************************ 00:29:05.767 START TEST nvmf_perf_adq 00:29:05.767 ************************************ 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:29:05.767 * Looking for test storage... 
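run_test has just handed control to perf_adq.sh between START/END banners, the same pattern that closed nvmf_initiator_timeout above with its real/user/sys summary. A hypothetical re-creation of that wrapper using only bash built-ins; the real helper lives in autotest_common.sh and additionally manages xtrace state:

run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # bash's time keyword prints the real/user/sys lines
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return "$rc"
}
# e.g.: run_test_sketch nvmf_perf_adq ./perf_adq.sh --transport=tcp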
00:29:05.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.767 --rc genhtml_branch_coverage=1 00:29:05.767 --rc genhtml_function_coverage=1 00:29:05.767 --rc genhtml_legend=1 00:29:05.767 --rc geninfo_all_blocks=1 00:29:05.767 --rc geninfo_unexecuted_blocks=1 00:29:05.767 00:29:05.767 ' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.767 --rc genhtml_branch_coverage=1 00:29:05.767 --rc genhtml_function_coverage=1 00:29:05.767 --rc genhtml_legend=1 00:29:05.767 --rc geninfo_all_blocks=1 00:29:05.767 --rc geninfo_unexecuted_blocks=1 00:29:05.767 00:29:05.767 ' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.767 --rc genhtml_branch_coverage=1 00:29:05.767 --rc genhtml_function_coverage=1 00:29:05.767 --rc genhtml_legend=1 00:29:05.767 --rc geninfo_all_blocks=1 00:29:05.767 --rc geninfo_unexecuted_blocks=1 00:29:05.767 00:29:05.767 ' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.767 --rc genhtml_branch_coverage=1 00:29:05.767 --rc genhtml_function_coverage=1 00:29:05.767 --rc genhtml_legend=1 00:29:05.767 --rc geninfo_all_blocks=1 00:29:05.767 --rc geninfo_unexecuted_blocks=1 00:29:05.767 00:29:05.767 ' 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 
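The lcov check traced here (lt 1.15 2, i.e. cmp_versions 1.15 '<' 2) splits both version strings on '.' and '-' and compares them field by field as integers, which is why the coverage options are only exported when lcov is new enough. A compact sketch of that comparison under the same splitting rule; the function name is hypothetical and numeric fields are assumed:

lt_version() {
  local IFS=.-                     # split fields on '.' and '-', as cmp_versions does
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i a b
  for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
    a=${v1[i]:-0} b=${v2[i]:-0}    # missing trailing fields compare as 0
    ((a < b)) && return 0
    ((a > b)) && return 1
  done
  return 1                         # equal versions are not less-than
}
lt_version 1.15 2 && echo "1.15 < 2"   # matches the result traced above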
00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.767 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:05.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:29:05.768 09:46:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.768 09:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:12.353 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:12.354 09:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:12.354 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:12.354 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:12.354 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:12.354 09:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:12.354 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:12.354 09:46:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:13.740 09:46:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:15.656 09:46:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:20.950 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:20.951 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:20.951 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:20.951 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:20.951 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:20.951 09:46:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:20.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:20.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.684 ms 00:29:20.951 00:29:20.951 --- 10.0.0.2 ping statistics --- 00:29:20.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.951 rtt min/avg/max/mdev = 0.684/0.684/0.684/0.000 ms 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:20.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:20.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:29:20.951 00:29:20.951 --- 10.0.0.1 ping statistics --- 00:29:20.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:20.951 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2914333 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2914333 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2914333 ']' 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:20.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.951 09:46:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:20.951 [2024-12-09 09:46:56.381285] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
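At this point nvmftestinit has carved the two E810 ports into a point-to-point setup: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened, and reachability is ping-verified in both directions before nvmf_tgt starts inside the namespace. The same topology condensed into a runnable sketch; the cvl_* interface names are the ones from this log, so substitute your own:

# Target port lives in its own namespace; initiator port stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator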
00:29:20.951 [2024-12-09 09:46:56.381355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.211 [2024-12-09 09:46:56.484770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:21.211 [2024-12-09 09:46:56.512777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.211 [2024-12-09 09:46:56.512828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.211 [2024-12-09 09:46:56.512837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.211 [2024-12-09 09:46:56.512844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.211 [2024-12-09 09:46:56.512851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.211 [2024-12-09 09:46:56.514661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.211 [2024-12-09 09:46:56.514786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.211 [2024-12-09 09:46:56.514949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.211 [2024-12-09 09:46:56.514950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.783 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 
09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 [2024-12-09 09:46:57.359430] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 Malloc1 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.043 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:22.043 [2024-12-09 09:46:57.431391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.044 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.044 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2914440 00:29:22.044 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:29:22.044 09:46:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:24.589 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:29:24.589 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.589 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:24.589 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.589 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:29:24.589 "tick_rate": 2400000000, 00:29:24.589 "poll_groups": [ 00:29:24.589 { 00:29:24.589 "name": "nvmf_tgt_poll_group_000", 00:29:24.589 "admin_qpairs": 1, 00:29:24.589 "io_qpairs": 1, 00:29:24.589 "current_admin_qpairs": 1, 00:29:24.589 "current_io_qpairs": 1, 00:29:24.589 "pending_bdev_io": 0, 00:29:24.589 "completed_nvme_io": 19596, 00:29:24.589 "transports": [ 00:29:24.589 { 00:29:24.589 "trtype": "TCP" 00:29:24.589 } 00:29:24.589 ] 00:29:24.589 }, 00:29:24.589 { 00:29:24.589 "name": "nvmf_tgt_poll_group_001", 00:29:24.589 "admin_qpairs": 0, 00:29:24.589 "io_qpairs": 1, 00:29:24.589 "current_admin_qpairs": 0, 00:29:24.589 "current_io_qpairs": 1, 00:29:24.589 "pending_bdev_io": 0, 00:29:24.589 "completed_nvme_io": 28443, 00:29:24.589 "transports": [ 00:29:24.589 { 00:29:24.589 "trtype": "TCP" 00:29:24.589 } 00:29:24.589 ] 00:29:24.589 }, 00:29:24.589 { 00:29:24.589 "name": "nvmf_tgt_poll_group_002", 00:29:24.589 "admin_qpairs": 0, 00:29:24.589 "io_qpairs": 1, 00:29:24.589 "current_admin_qpairs": 0, 00:29:24.589 "current_io_qpairs": 1, 00:29:24.589 "pending_bdev_io": 0, 00:29:24.590 "completed_nvme_io": 20891, 00:29:24.590 "transports": [ 00:29:24.590 { 00:29:24.590 "trtype": "TCP" 00:29:24.590 } 00:29:24.590 ] 00:29:24.590 }, 00:29:24.590 { 00:29:24.590 "name": "nvmf_tgt_poll_group_003", 00:29:24.590 "admin_qpairs": 0, 00:29:24.590 "io_qpairs": 1, 00:29:24.590 "current_admin_qpairs": 0, 00:29:24.590 "current_io_qpairs": 1, 00:29:24.590 "pending_bdev_io": 0, 00:29:24.590 "completed_nvme_io": 19678, 00:29:24.590 "transports": [ 00:29:24.590 { 00:29:24.590 "trtype": "TCP" 00:29:24.590 } 00:29:24.590 ] 00:29:24.590 } 00:29:24.590 ] 00:29:24.590 }' 00:29:24.590 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:29:24.590 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:29:24.590 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:29:24.590 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:29:24.590 09:46:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2914440 00:29:32.727 Initializing NVMe Controllers 00:29:32.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:32.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:32.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:32.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:29:32.727 Initialization complete. Launching workers. 00:29:32.727 ======================================================== 00:29:32.727 Latency(us) 00:29:32.727 Device Information : IOPS MiB/s Average min max 00:29:32.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12276.60 47.96 5213.76 1300.98 9417.99 00:29:32.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14725.50 57.52 4345.65 1642.94 9496.95 00:29:32.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13639.50 53.28 4692.11 1240.21 11273.98 00:29:32.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12807.60 50.03 4996.57 1219.54 11287.45 00:29:32.727 ======================================================== 00:29:32.727 Total : 53449.19 208.79 4789.43 1219.54 11287.45 00:29:32.727 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:32.727 rmmod nvme_tcp 00:29:32.727 rmmod nvme_fabrics 00:29:32.727 rmmod nvme_keyring 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2914333 ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2914333 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2914333 ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2914333 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2914333 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2914333' 00:29:32.727 killing process with pid 2914333 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2914333 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2914333 00:29:32.727 09:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:32.727 09:47:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:34.639 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:34.639 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:29:34.639 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:29:34.639 09:47:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:29:36.557 09:47:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:29:38.475 09:47:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
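Between the two measurement passes the harness resets the NIC so the next run starts from a clean channel configuration. Condensed from the adq_reload_driver trace above (module names as used by the test: ice for the E810 ports, sch_mqprio for the channel-mode qdisc):

  modprobe -a sch_mqprio   # qdisc module required for mqprio 'mode channel'
  rmmod ice                # drop all existing channel/filter state on the E810
  modprobe ice
  sleep 5                  # let the ports re-register before reconfiguring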
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.767 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:43.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:43.768 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:43.768 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:43.768 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.768 09:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:29:43.768 00:29:43.768 --- 10.0.0.2 ping statistics --- 00:29:43.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.768 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:43.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:29:43.768 00:29:43.768 --- 10.0.0.1 ping statistics --- 00:29:43.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.768 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.768 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:29:43.769 net.core.busy_poll = 1 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
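The nvmf_tcp_init trace above builds a two-port loopback topology on one host: the target port is moved into its own network namespace so initiator and target traffic actually crosses the wire. Condensed, with the interface names as detected (cvl_0_0 target side, cvl_0_1 initiator side):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify both directions before any NVMe traffic is attempted.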
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:29:43.769 net.core.busy_read = 1 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:29:43.769 09:47:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.769 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2919001 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2919001 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2919001 ']' 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.031 09:47:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.031 [2024-12-09 09:47:19.283458] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:29:44.031 [2024-12-09 09:47:19.283522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:44.031 [2024-12-09 09:47:19.384660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:44.031 [2024-12-09 09:47:19.413218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
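adq_configure_driver, traced above, is the heart of the busy-poll variant: hardware TC offload is enabled on the target port, two traffic classes are carved out in channel mode, and a flower filter pins NVMe/TCP traffic (dst 10.0.0.2:4420) to TC1 in hardware. A condensed equivalent (ethtool and tc run inside the target namespace; the busy-poll sysctls are global):

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1 net.core.busy_read=1
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

queues 2@0 2@2 gives each TC two queues (TC0 at offset 0, TC1 at offset 2); skip_sw keeps the filter purely in the NIC. The set_xps_rxqs helper script then ties XPS transmit-queue selection to the corresponding receive queues.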
00:29:44.031 [2024-12-09 09:47:19.413271] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:44.031 [2024-12-09 09:47:19.413280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:44.031 [2024-12-09 09:47:19.413287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:44.031 [2024-12-09 09:47:19.413293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:44.031 [2024-12-09 09:47:19.415152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.031 [2024-12-09 09:47:19.415276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:44.031 [2024-12-09 09:47:19.415439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:44.031 [2024-12-09 09:47:19.415439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 
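Because the target was started with --wait-for-rpc, the ADQ-relevant socket options can be applied before the framework initializes. The sequence above, as standalone RPCs (assuming scripts/rpc.py pointed at the same /var/tmp/spdk.sock):

  impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)    # 'posix' here
  scripts/rpc.py sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
  scripts/rpc.py framework_start_init    # options must land before subsystem init

--enable-placement-id 1 lets the posix sock layer place incoming connections onto poll groups using the kernel's queue/NAPI placement hint, which is what the qpair-distribution checks in this test depend on.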
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 [2024-12-09 09:47:20.251611] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 Malloc1 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:44.977 [2024-12-09 09:47:20.333708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2919248 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:44.977 09:47:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:47.524 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:47.524 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.524 09:47:22 
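Target bring-up for the busy-poll pass mirrors the earlier pass, with the socket priority tied into the mqprio map configured above (priority 1 maps to TC1). As standalone RPCs:

  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

--sock-priority 1 sets the priority on the target's NVMe/TCP sockets, so egress traffic rides the same hardware traffic class that the flower filter selects on ingress.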
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:47.524 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.524 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:47.524 "tick_rate": 2400000000, 00:29:47.524 "poll_groups": [ 00:29:47.524 { 00:29:47.524 "name": "nvmf_tgt_poll_group_000", 00:29:47.524 "admin_qpairs": 1, 00:29:47.524 "io_qpairs": 3, 00:29:47.524 "current_admin_qpairs": 1, 00:29:47.524 "current_io_qpairs": 3, 00:29:47.524 "pending_bdev_io": 0, 00:29:47.524 "completed_nvme_io": 29503, 00:29:47.524 "transports": [ 00:29:47.524 { 00:29:47.524 "trtype": "TCP" 00:29:47.524 } 00:29:47.524 ] 00:29:47.524 }, 00:29:47.524 { 00:29:47.524 "name": "nvmf_tgt_poll_group_001", 00:29:47.524 "admin_qpairs": 0, 00:29:47.524 "io_qpairs": 1, 00:29:47.524 "current_admin_qpairs": 0, 00:29:47.524 "current_io_qpairs": 1, 00:29:47.524 "pending_bdev_io": 0, 00:29:47.524 "completed_nvme_io": 41527, 00:29:47.524 "transports": [ 00:29:47.524 { 00:29:47.524 "trtype": "TCP" 00:29:47.524 } 00:29:47.524 ] 00:29:47.524 }, 00:29:47.524 { 00:29:47.524 "name": "nvmf_tgt_poll_group_002", 00:29:47.524 "admin_qpairs": 0, 00:29:47.524 "io_qpairs": 0, 00:29:47.524 "current_admin_qpairs": 0, 00:29:47.524 "current_io_qpairs": 0, 00:29:47.524 "pending_bdev_io": 0, 00:29:47.524 "completed_nvme_io": 0, 00:29:47.524 "transports": [ 00:29:47.524 { 00:29:47.524 "trtype": "TCP" 00:29:47.524 } 00:29:47.524 ] 00:29:47.524 }, 00:29:47.524 { 00:29:47.524 "name": "nvmf_tgt_poll_group_003", 00:29:47.524 "admin_qpairs": 0, 00:29:47.524 "io_qpairs": 0, 00:29:47.524 "current_admin_qpairs": 0, 00:29:47.524 "current_io_qpairs": 0, 00:29:47.524 "pending_bdev_io": 0, 00:29:47.524 "completed_nvme_io": 0, 00:29:47.524 "transports": [ 00:29:47.524 { 00:29:47.524 "trtype": "TCP" 00:29:47.524 } 00:29:47.524 ] 00:29:47.525 } 00:29:47.525 ] 00:29:47.525 }' 00:29:47.525 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:47.525 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:47.525 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:47.525 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:47.525 09:47:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2919248 00:29:55.660 Initializing NVMe Controllers 00:29:55.660 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:55.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:55.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:55.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:55.660 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:55.660 Initialization complete. Launching workers. 
00:29:55.660 ======================================================== 00:29:55.660 Latency(us) 00:29:55.660 Device Information : IOPS MiB/s Average min max 00:29:55.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6907.04 26.98 9267.48 1374.38 59483.34 00:29:55.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6564.55 25.64 9784.24 1402.56 57548.50 00:29:55.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 21874.02 85.45 2925.49 1633.89 45247.57 00:29:55.660 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5509.76 21.52 11652.73 1398.30 58657.72 00:29:55.660 ======================================================== 00:29:55.660 Total : 40855.37 159.59 6276.68 1374.38 59483.34 00:29:55.660 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.660 rmmod nvme_tcp 00:29:55.660 rmmod nvme_fabrics 00:29:55.660 rmmod nvme_keyring 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2919001 ']' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2919001 ']' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2919001' 00:29:55.660 killing process with pid 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2919001 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.660 
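The assertion flips for the busy-poll pass: instead of demanding a one-qpair-per-group spread, the stats must show at least two completely idle poll groups (group_000 absorbed three IO qpairs, group_001 one, groups 002/003 none), confirming that connection placement collapsed onto the busy-polled queues. Sketched standalone:

  idle=$(scripts/rpc.py nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
  [ "$idle" -lt 2 ] && echo "busy-poll placement failed: only $idle idle poll groups"

The latency table reflects the uneven placement: one initiator core (lcore 6) completes about 21.9K IOPS at 2.9 ms average while the other three sit between 5.5K and 6.9K, and the 10-second total drops to roughly 40.9K IOPS from 53.4K in the first pass, with max latencies stretching to about 59 ms versus 11 ms before.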
09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.660 09:47:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:58.963 00:29:58.963 real 0m53.849s 00:29:58.963 user 2m49.458s 00:29:58.963 sys 0m11.496s 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:58.963 ************************************ 00:29:58.963 END TEST nvmf_perf_adq 00:29:58.963 ************************************ 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:58.963 ************************************ 00:29:58.963 START TEST nvmf_shutdown 00:29:58.963 ************************************ 00:29:58.963 09:47:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:58.963 * Looking for test storage... 
00:29:58.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.963 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.964 --rc genhtml_branch_coverage=1 00:29:58.964 --rc genhtml_function_coverage=1 00:29:58.964 --rc genhtml_legend=1 00:29:58.964 --rc geninfo_all_blocks=1 00:29:58.964 --rc geninfo_unexecuted_blocks=1 00:29:58.964 00:29:58.964 ' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:58.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.964 --rc genhtml_branch_coverage=1 00:29:58.964 --rc genhtml_function_coverage=1 00:29:58.964 --rc genhtml_legend=1 00:29:58.964 --rc geninfo_all_blocks=1 00:29:58.964 --rc geninfo_unexecuted_blocks=1 00:29:58.964 00:29:58.964 ' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.964 --rc genhtml_branch_coverage=1 00:29:58.964 --rc genhtml_function_coverage=1 00:29:58.964 --rc genhtml_legend=1 00:29:58.964 --rc geninfo_all_blocks=1 00:29:58.964 --rc geninfo_unexecuted_blocks=1 00:29:58.964 00:29:58.964 ' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.964 --rc genhtml_branch_coverage=1 00:29:58.964 --rc genhtml_function_coverage=1 00:29:58.964 --rc genhtml_legend=1 00:29:58.964 --rc geninfo_all_blocks=1 00:29:58.964 --rc geninfo_unexecuted_blocks=1 00:29:58.964 00:29:58.964 ' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
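The lcov gate traced above (lt 1.15 2 via cmp_versions) decides which coverage flags to export. A compact equivalent of the comparison, splitting versions on dots, dashes, and colons and padding missing fields with zero (a sketch, not the verbatim scripts/common.sh helper):

  lt() {   # succeeds when $1 sorts strictly before $2
      local IFS=.-:
      local -a v1 v2
      read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
      local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # equal versions are not 'less than'
  }

  lt 1.15 2 && echo "old lcov"   # 1 < 2 on the first field, so this prints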
00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.964 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:58.964 09:47:34 
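The "[: : integer expression expected" complaint above is benign but worth decoding: line 33 of nvmf/common.sh ran '[' '' -eq 1 ']', a numeric test against a variable that expanded to the empty string. The usual hardening is to give the expansion a numeric default (the variable name below is illustrative, not the one common.sh actually tests):

  if [ "${SPDK_SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi

With :-0 in place, an unset or empty variable compares as 0 instead of tripping test's integer parser.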
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:58.964 ************************************ 00:29:58.964 START TEST nvmf_shutdown_tc1 00:29:58.964 ************************************ 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.964 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.965 09:47:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:07.105 09:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.105 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:07.106 09:47:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:07.106 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:07.106 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:07.106 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:07.106 09:47:41 
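Each surviving PCI address is then resolved to its kernel interface by globbing sysfs, and the interface is kept only if its link reports up (the [[ up == up ]] tests above); the echoed 'Found net devices under ...' lines are that loop's output. The lookup reduces to the following sketch (the up-check via operstate is an assumption about what the harness compares):

# Map a PCI address to its network interface(s) through sysfs.
pci=0000:4b:00.0
net_devs=()
for path in "/sys/bus/pci/devices/$pci/net/"*; do
  dev=${path##*/}                                    # e.g. cvl_0_0
  [[ $(cat "$path/operstate") == up ]] && net_devs+=("$dev")
done
echo "Found net devices under $pci: ${net_devs[*]}"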
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:07.106 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:07.106 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:07.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:07.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:30:07.107 00:30:07.107 --- 10.0.0.2 ping statistics --- 00:30:07.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.107 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:07.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:30:07.107 00:30:07.107 --- 10.0.0.1 ping statistics --- 00:30:07.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.107 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2925707 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2925707 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2925707 ']' 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
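nvmf_tcp_init, traced above, moves the target port into a private network namespace so target (10.0.0.2) and initiator (10.0.0.1) exchange traffic over the real link rather than loopback, then proves connectivity with one ping in each direction before the target app starts. Collapsed to its commands (interface names and addresses taken from the log; the initial address flushes are omitted; needs root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"               # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Tag the rule so teardown can strip it with iptables-save | grep -v SPDK_NVMF:
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                            # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1        # target -> initiator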
00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.107 09:47:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.107 [2024-12-09 09:47:41.817279] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:07.107 [2024-12-09 09:47:41.817349] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.107 [2024-12-09 09:47:41.918159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:07.107 [2024-12-09 09:47:41.946035] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.107 [2024-12-09 09:47:41.946083] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.107 [2024-12-09 09:47:41.946092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.107 [2024-12-09 09:47:41.946099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.107 [2024-12-09 09:47:41.946105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.107 [2024-12-09 09:47:41.948301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:07.107 [2024-12-09 09:47:41.948472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:07.107 [2024-12-09 09:47:41.948633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:07.107 [2024-12-09 09:47:41.948634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.368 [2024-12-09 09:47:42.665576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:07.368 09:47:42 
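nvmfappstart launches nvmf_tgt inside that namespace (NVMF_APP gets the ip netns exec prefix at nvmf/common.sh@293 above) and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock; the (( i == 0 )) / return 0 pair above is that loop succeeding on its first probe. The poll itself is not expanded in this excerpt; a minimal equivalent, using SPDK's stock rpc.py client and the trace's max_retries=100 bound, would be:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
  # rpc_get_methods is a cheap query; success means the RPC socket is live.
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
  sleep 0.5
done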
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.368 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.369 09:47:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.369 Malloc1 
00:30:07.369 [2024-12-09 09:47:42.788514] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.369 Malloc2 00:30:07.640 Malloc3 00:30:07.640 Malloc4 00:30:07.640 Malloc5 00:30:07.640 Malloc6 00:30:07.640 Malloc7 00:30:07.640 Malloc8 00:30:07.640 Malloc9 00:30:07.902 Malloc10 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2926078 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2926078 /var/tmp/bdevperf.sock 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2926078 ']' 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:07.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
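create_subsystems (shutdown.sh@25-@36 above) is a write-then-replay scheme: the loop cats one RPC batch per subsystem into rpcs.txt, and a single rpc_cmd run replays the whole file against the target, which is why Malloc1 through Malloc10 and the 10.0.0.2:4420 listener notice appear in one burst. The batch bodies are not shown in this excerpt; judging from the bdev names and the listener, each of the ten iterations amounts to roughly the following (standard SPDK RPCs, with illustrative sizes, and echoes standing in for the script's heredocs):

for i in "${num_subsystems[@]}"; do    # num_subsystems=({1..10}) per the trace
  {
    echo "bdev_malloc_create -b Malloc$i 64 512"
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
  } >> rpcs.txt
done
./scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt   # one replay for all ten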
00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.902 "adrfam": "ipv4", 00:30:07.902 "trsvcid": "$NVMF_PORT", 00:30:07.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.902 "hdgst": ${hdgst:-false}, 00:30:07.902 "ddgst": ${ddgst:-false} 00:30:07.902 }, 00:30:07.902 "method": "bdev_nvme_attach_controller" 00:30:07.902 } 00:30:07.902 EOF 00:30:07.902 )") 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.902 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.902 { 00:30:07.902 "params": { 00:30:07.902 "name": "Nvme$subsystem", 00:30:07.902 "trtype": "$TEST_TRANSPORT", 00:30:07.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "$NVMF_PORT", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.903 "hdgst": ${hdgst:-false}, 00:30:07.903 "ddgst": ${ddgst:-false} 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 } 00:30:07.903 EOF 00:30:07.903 )") 
00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.903 { 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme$subsystem", 00:30:07.903 "trtype": "$TEST_TRANSPORT", 00:30:07.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "$NVMF_PORT", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.903 "hdgst": ${hdgst:-false}, 00:30:07.903 "ddgst": ${ddgst:-false} 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 } 00:30:07.903 EOF 00:30:07.903 )") 00:30:07.903 [2024-12-09 09:47:43.253795] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:07.903 [2024-12-09 09:47:43.253849] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.903 { 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme$subsystem", 00:30:07.903 "trtype": "$TEST_TRANSPORT", 00:30:07.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "$NVMF_PORT", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.903 "hdgst": ${hdgst:-false}, 00:30:07.903 "ddgst": ${ddgst:-false} 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 } 00:30:07.903 EOF 00:30:07.903 )") 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:07.903 { 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme$subsystem", 00:30:07.903 "trtype": "$TEST_TRANSPORT", 00:30:07.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "$NVMF_PORT", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:07.903 "hdgst": ${hdgst:-false}, 00:30:07.903 "ddgst": ${ddgst:-false} 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 } 00:30:07.903 EOF 00:30:07.903 )") 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
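gen_nvmf_target_json, whose expansion fills the trace above, builds the initiator-side bdev config entirely in memory: one bdev_nvme_attach_controller block per subsystem is captured from a heredoc into the config array, the blocks are comma-joined via IFS=',', and jq validates the result; the printf output just below is that joined list. Nothing is written to disk -- the consumer reads it through process substitution (--json <(gen_nvmf_target_json ...), visible in the 'Killed' line further down). A skeleton of the pattern, with the outer subsystems wrapper simplified and the hdgst/ddgst overrides hardcoded:

gen_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .
}
# Typical consumption, no temp file:
#   bdevperf --json <(gen_target_json 1 2 3) -q 64 -o 65536 -w verify -t 1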
00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:07.903 09:47:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme1", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme2", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme3", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme4", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme5", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme6", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme7", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme8", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme9", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 },{ 00:30:07.903 "params": { 00:30:07.903 "name": "Nvme10", 00:30:07.903 "trtype": "tcp", 00:30:07.903 "traddr": "10.0.0.2", 00:30:07.903 "adrfam": "ipv4", 00:30:07.903 "trsvcid": "4420", 00:30:07.903 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:07.903 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:07.903 "hdgst": false, 00:30:07.903 "ddgst": false 00:30:07.903 }, 00:30:07.903 "method": "bdev_nvme_attach_controller" 00:30:07.903 }' 00:30:07.903 [2024-12-09 09:47:43.342927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.165 [2024-12-09 09:47:43.361212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2926078 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:30:09.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2926078 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:30:09.552 09:47:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2925707 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 [2024-12-09 09:47:45.841148] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:30:10.540 [2024-12-09 09:47:45.841202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2926463 ] 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.540 "hdgst": ${hdgst:-false}, 00:30:10.540 "ddgst": ${ddgst:-false} 00:30:10.540 }, 00:30:10.540 "method": "bdev_nvme_attach_controller" 00:30:10.540 } 00:30:10.540 EOF 00:30:10.540 )") 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.540 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.540 { 00:30:10.540 "params": { 00:30:10.540 "name": "Nvme$subsystem", 00:30:10.540 "trtype": "$TEST_TRANSPORT", 00:30:10.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.540 "adrfam": "ipv4", 00:30:10.540 "trsvcid": "$NVMF_PORT", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.541 "hdgst": ${hdgst:-false}, 00:30:10.541 "ddgst": ${ddgst:-false} 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 } 00:30:10.541 EOF 00:30:10.541 )") 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:10.541 { 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme$subsystem", 00:30:10.541 "trtype": "$TEST_TRANSPORT", 00:30:10.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "$NVMF_PORT", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:10.541 "hdgst": ${hdgst:-false}, 00:30:10.541 "ddgst": ${ddgst:-false} 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 } 00:30:10.541 EOF 00:30:10.541 )") 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:30:10.541 09:47:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme1", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme2", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme3", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme4", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme5", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme6", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme7", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme8", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme9", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 },{ 00:30:10.541 "params": { 00:30:10.541 "name": "Nvme10", 00:30:10.541 "trtype": "tcp", 00:30:10.541 "traddr": "10.0.0.2", 00:30:10.541 "adrfam": "ipv4", 00:30:10.541 "trsvcid": "4420", 00:30:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:10.541 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:10.541 "hdgst": false, 00:30:10.541 "ddgst": false 00:30:10.541 }, 00:30:10.541 "method": "bdev_nvme_attach_controller" 00:30:10.541 }' 00:30:10.541 [2024-12-09 09:47:45.930898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.541 [2024-12-09 09:47:45.948869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.490 Running I/O for 1 seconds... 00:30:13.430 1869.00 IOPS, 116.81 MiB/s 00:30:13.430 Latency(us) 00:30:13.430 [2024-12-09T08:47:48.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:13.430 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme1n1 : 1.15 222.72 13.92 0.00 0.00 284491.52 17913.17 262144.00 00:30:13.430 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme2n1 : 1.19 215.47 13.47 0.00 0.00 289237.33 17913.17 255153.49 00:30:13.430 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme3n1 : 1.13 225.75 14.11 0.00 0.00 270970.88 34297.17 260396.37 00:30:13.430 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme4n1 : 1.18 270.16 16.89 0.00 0.00 221389.40 13817.17 249910.61 00:30:13.430 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme5n1 : 1.16 221.28 13.83 0.00 0.00 267013.33 18677.76 253405.87 00:30:13.430 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme6n1 : 1.20 213.46 13.34 0.00 0.00 272442.88 22173.01 281367.89 00:30:13.430 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme7n1 : 1.15 223.21 13.95 0.00 0.00 254735.57 14308.69 249910.61 00:30:13.430 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme8n1 : 1.20 270.39 16.90 0.00 0.00 207054.82 4287.15 256901.12 00:30:13.430 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme9n1 : 1.20 271.00 16.94 0.00 0.00 202806.87 3426.99 248162.99 00:30:13.430 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:30:13.430 Verification LBA range: start 0x0 length 0x400 00:30:13.430 Nvme10n1 : 1.21 264.48 16.53 0.00 0.00 204571.99 11195.73 255153.49 00:30:13.430 [2024-12-09T08:47:48.883Z] =================================================================================================================== 00:30:13.430 [2024-12-09T08:47:48.883Z] Total : 2397.93 149.87 0.00 0.00 243859.44 3426.99 281367.89 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.430 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.430 rmmod nvme_tcp 00:30:13.430 rmmod nvme_fabrics 00:30:13.690 rmmod nvme_keyring 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2925707 ']' 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2925707 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2925707 ']' 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2925707 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2925707 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:13.690 09:47:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2925707' 00:30:13.690 killing process with pid 2925707 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2925707 00:30:13.690 09:47:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2925707 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.949 09:47:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.860 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:15.860 00:30:15.860 real 0m17.059s 00:30:15.860 user 0m35.343s 00:30:15.860 sys 0m6.850s 00:30:15.860 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:15.860 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:15.860 ************************************ 00:30:15.860 END TEST nvmf_shutdown_tc1 00:30:15.860 ************************************ 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:16.120 ************************************ 00:30:16.120 START TEST nvmf_shutdown_tc2 00:30:16.120 ************************************ 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:30:16.120 09:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.120 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.120 09:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:16.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.120 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.121 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.121 09:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:16.121 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:16.381 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.540 ms 00:30:16.381 00:30:16.381 --- 10.0.0.2 ping statistics --- 00:30:16.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.381 rtt min/avg/max/mdev = 0.540/0.540/0.540/0.000 ms 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:30:16.381 00:30:16.381 --- 10.0.0.1 ping statistics --- 00:30:16.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.381 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2927718 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2927718 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:16.381 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2927718 ']' 00:30:16.382 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.382 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:16.382 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.382 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:16.382 09:47:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:16.382 [2024-12-09 09:47:51.826362] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:16.382 [2024-12-09 09:47:51.826432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.641 [2024-12-09 09:47:51.921718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:16.641 [2024-12-09 09:47:51.940916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.641 [2024-12-09 09:47:51.940948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.641 [2024-12-09 09:47:51.940954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.641 [2024-12-09 09:47:51.940959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.641 [2024-12-09 09:47:51.940963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
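What nvmftestinit is doing above, condensed: the harness builds a single-host loopback topology by moving one port of the two-port NIC into a private network namespace, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1 in the root namespace) exchange real NVMe/TCP traffic over the wire. A minimal sketch of the same setup, lifted from the nvmf/common.sh commands traced above (interface and namespace names are this rig's and will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk                                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

With connectivity confirmed in both directions, nvmf_tgt is launched inside the namespace with core mask -m 0x1E (binary 11110, i.e. cores 1-4), which is why four reactors report in on cores 1 through 4 just below.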
00:30:16.641 [2024-12-09 09:47:51.942322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:16.641 [2024-12-09 09:47:51.942482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:16.641 [2024-12-09 09:47:51.942643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.641 [2024-12-09 09:47:51.942660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.212 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.473 [2024-12-09 09:47:52.669152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.473 09:47:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.473 Malloc1 00:30:17.473 [2024-12-09 09:47:52.778537] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.473 Malloc2 00:30:17.473 Malloc3 00:30:17.473 Malloc4 00:30:17.473 Malloc5 00:30:17.734 Malloc6 00:30:17.735 Malloc7 00:30:17.735 Malloc8 00:30:17.735 Malloc9 00:30:17.735 Malloc10 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2927966 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2927966 /var/tmp/bdevperf.sock 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2927966 ']' 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:17.735 09:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:17.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.735 { 00:30:17.735 "params": { 00:30:17.735 "name": "Nvme$subsystem", 00:30:17.735 "trtype": "$TEST_TRANSPORT", 00:30:17.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.735 "adrfam": "ipv4", 00:30:17.735 "trsvcid": "$NVMF_PORT", 00:30:17.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.735 "hdgst": ${hdgst:-false}, 00:30:17.735 "ddgst": ${ddgst:-false} 00:30:17.735 }, 00:30:17.735 "method": "bdev_nvme_attach_controller" 00:30:17.735 } 00:30:17.735 EOF 00:30:17.735 )") 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.735 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.735 { 00:30:17.735 "params": { 00:30:17.735 "name": "Nvme$subsystem", 00:30:17.735 "trtype": "$TEST_TRANSPORT", 00:30:17.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.735 "adrfam": "ipv4", 00:30:17.735 "trsvcid": "$NVMF_PORT", 00:30:17.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.735 "hdgst": ${hdgst:-false}, 00:30:17.735 "ddgst": ${ddgst:-false} 00:30:17.735 }, 00:30:17.735 "method": "bdev_nvme_attach_controller" 00:30:17.735 } 00:30:17.735 EOF 00:30:17.735 )") 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.998 { 00:30:17.998 "params": { 00:30:17.998 
"name": "Nvme$subsystem", 00:30:17.998 "trtype": "$TEST_TRANSPORT", 00:30:17.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.998 "adrfam": "ipv4", 00:30:17.998 "trsvcid": "$NVMF_PORT", 00:30:17.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.998 "hdgst": ${hdgst:-false}, 00:30:17.998 "ddgst": ${ddgst:-false} 00:30:17.998 }, 00:30:17.998 "method": "bdev_nvme_attach_controller" 00:30:17.998 } 00:30:17.998 EOF 00:30:17.998 )") 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.998 { 00:30:17.998 "params": { 00:30:17.998 "name": "Nvme$subsystem", 00:30:17.998 "trtype": "$TEST_TRANSPORT", 00:30:17.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.998 "adrfam": "ipv4", 00:30:17.998 "trsvcid": "$NVMF_PORT", 00:30:17.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.998 "hdgst": ${hdgst:-false}, 00:30:17.998 "ddgst": ${ddgst:-false} 00:30:17.998 }, 00:30:17.998 "method": "bdev_nvme_attach_controller" 00:30:17.998 } 00:30:17.998 EOF 00:30:17.998 )") 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.998 { 00:30:17.998 "params": { 00:30:17.998 "name": "Nvme$subsystem", 00:30:17.998 "trtype": "$TEST_TRANSPORT", 00:30:17.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.998 "adrfam": "ipv4", 00:30:17.998 "trsvcid": "$NVMF_PORT", 00:30:17.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.998 "hdgst": ${hdgst:-false}, 00:30:17.998 "ddgst": ${ddgst:-false} 00:30:17.998 }, 00:30:17.998 "method": "bdev_nvme_attach_controller" 00:30:17.998 } 00:30:17.998 EOF 00:30:17.998 )") 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.998 { 00:30:17.998 "params": { 00:30:17.998 "name": "Nvme$subsystem", 00:30:17.998 "trtype": "$TEST_TRANSPORT", 00:30:17.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.998 "adrfam": "ipv4", 00:30:17.998 "trsvcid": "$NVMF_PORT", 00:30:17.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.998 "hdgst": ${hdgst:-false}, 00:30:17.998 "ddgst": ${ddgst:-false} 00:30:17.998 }, 00:30:17.998 "method": "bdev_nvme_attach_controller" 00:30:17.998 } 00:30:17.998 EOF 00:30:17.998 )") 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.998 [2024-12-09 09:47:53.222994] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:30:17.998 [2024-12-09 09:47:53.223047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927966 ] 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.998 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.998 { 00:30:17.998 "params": { 00:30:17.998 "name": "Nvme$subsystem", 00:30:17.999 "trtype": "$TEST_TRANSPORT", 00:30:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "$NVMF_PORT", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.999 "hdgst": ${hdgst:-false}, 00:30:17.999 "ddgst": ${ddgst:-false} 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 } 00:30:17.999 EOF 00:30:17.999 )") 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.999 { 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme$subsystem", 00:30:17.999 "trtype": "$TEST_TRANSPORT", 00:30:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "$NVMF_PORT", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.999 "hdgst": ${hdgst:-false}, 00:30:17.999 "ddgst": ${ddgst:-false} 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 } 00:30:17.999 EOF 00:30:17.999 )") 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.999 { 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme$subsystem", 00:30:17.999 "trtype": "$TEST_TRANSPORT", 00:30:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "$NVMF_PORT", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.999 "hdgst": ${hdgst:-false}, 00:30:17.999 "ddgst": ${ddgst:-false} 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 } 00:30:17.999 EOF 00:30:17.999 )") 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:17.999 { 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme$subsystem", 00:30:17.999 "trtype": "$TEST_TRANSPORT", 00:30:17.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.999 
"adrfam": "ipv4", 00:30:17.999 "trsvcid": "$NVMF_PORT", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.999 "hdgst": ${hdgst:-false}, 00:30:17.999 "ddgst": ${ddgst:-false} 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 } 00:30:17.999 EOF 00:30:17.999 )") 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:30:17.999 09:47:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme1", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme2", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme3", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme4", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme5", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme6", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme7", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 
00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme8", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme9", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 },{ 00:30:17.999 "params": { 00:30:17.999 "name": "Nvme10", 00:30:17.999 "trtype": "tcp", 00:30:17.999 "traddr": "10.0.0.2", 00:30:17.999 "adrfam": "ipv4", 00:30:17.999 "trsvcid": "4420", 00:30:17.999 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:17.999 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:17.999 "hdgst": false, 00:30:17.999 "ddgst": false 00:30:17.999 }, 00:30:17.999 "method": "bdev_nvme_attach_controller" 00:30:17.999 }' 00:30:17.999 [2024-12-09 09:47:53.314314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.999 [2024-12-09 09:47:53.332674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.924 Running I/O for 10 seconds... 
00:30:19.924 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:19.924 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:19.924 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:19.924 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.924 09:47:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:19.924 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.184 09:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:20.184 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2927966 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2927966 ']' 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2927966 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927966 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927966' 00:30:20.444 killing process with pid 2927966 00:30:20.444 09:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2927966 00:30:20.444 09:47:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2927966 00:30:20.704 Received shutdown signal, test time was about 0.978745 seconds 00:30:20.704 00:30:20.704 Latency(us) 00:30:20.704 [2024-12-09T08:47:56.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.704 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme1n1 : 0.96 199.67 12.48 0.00 0.00 316694.47 22282.24 283115.52 00:30:20.704 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme2n1 : 0.96 200.55 12.53 0.00 0.00 308747.09 21299.20 279620.27 00:30:20.704 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme3n1 : 0.97 263.89 16.49 0.00 0.00 229762.77 13271.04 267386.88 00:30:20.704 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme4n1 : 0.94 203.98 12.75 0.00 0.00 290574.22 23592.96 279620.27 00:30:20.704 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme5n1 : 0.95 207.33 12.96 0.00 0.00 278720.32 3686.40 272629.76 00:30:20.704 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme6n1 : 0.94 203.51 12.72 0.00 0.00 278442.38 24357.55 270882.13 00:30:20.704 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme7n1 : 0.98 262.05 16.38 0.00 0.00 211955.84 18896.21 286610.77 00:30:20.704 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme8n1 : 0.98 259.75 16.23 0.00 0.00 208549.83 16274.77 277872.64 00:30:20.704 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme9n1 : 0.97 197.71 12.36 0.00 0.00 267397.97 18786.99 302339.41 00:30:20.704 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:20.704 Verification LBA range: start 0x0 length 0x400 00:30:20.704 Nvme10n1 : 0.97 198.75 12.42 0.00 0.00 260067.84 21626.88 284863.15 00:30:20.704 [2024-12-09T08:47:56.157Z] =================================================================================================================== 00:30:20.704 [2024-12-09T08:47:56.157Z] Total : 2197.21 137.33 0.00 0.00 260789.09 3686.40 302339.41 00:30:20.704 09:47:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2927718 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:21.645 09:47:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.645 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.645 rmmod nvme_tcp 00:30:21.645 rmmod nvme_fabrics 00:30:21.645 rmmod nvme_keyring 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2927718 ']' 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2927718 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2927718 ']' 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2927718 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927718 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927718' 00:30:21.907 killing process with pid 2927718 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2927718 00:30:21.907 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2927718 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:22.167 09:47:57 
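Editorial note: the stoptarget/nvmftestfini path traced here (state-file and config removal and module unload above, iptables and namespace cleanup continuing below) tears the test down in a fixed order. A condensed sketch; the loop bound, helper names, and the SPDK_NVMF grep are taken from the xtrace, while $testdir and the exact retry condition are paraphrased:

# Condensed sketch of the stoptarget -> nvmftestfini sequence traced here.
rm -f ./local-job0-0-verify.state
rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
sync
set +e
# Unload host-side NVMe modules; retried because rmmod can fail while
# controllers are still detaching.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
done
set -e
killprocess "$nvmfpid"                    # stop the nvmf target (reactor_1 here)
# iptr: drop exactly the rules that were added with an SPDK_NVMF comment,
# leaving every other firewall rule untouched.
iptables-save | grep -v SPDK_NVMF | iptables-restore
remove_spdk_ns                            # delete the cvl_0_0_ns_spdk namespace
ip -4 addr flush cvl_0_1                  # clear the initiator-side address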
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.167 09:47:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:24.078 00:30:24.078 real 0m8.117s 00:30:24.078 user 0m24.852s 00:30:24.078 sys 0m1.350s 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.078 ************************************ 00:30:24.078 END TEST nvmf_shutdown_tc2 00:30:24.078 ************************************ 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.078 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 ************************************ 00:30:24.339 START TEST nvmf_shutdown_tc3 00:30:24.339 ************************************ 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:24.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:24.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:24.339 09:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.339 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:24.340 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:24.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:24.340 09:47:59 
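Editorial note: the device-discovery trace above walks the PCI bus for supported NICs (here two Intel E810 ports, 0x8086:0x159b) and collects the kernel network interfaces bound to them. A rough sysfs-only sketch of that logic; the real gather_supported_nvmf_pci_devs in nvmf/common.sh also handles x722, Mellanox IDs, and RDMA-specific filtering:

# Minimal sketch: find net devices for Intel E810 (0x8086, 0x1592/0x159b) NICs.
intel=0x8086
e810=(0x1592 0x159b)
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(< "$pci/vendor") device=$(< "$pci/device")
    [[ $vendor == "$intel" ]] || continue
    [[ " ${e810[*]} " == *" $device "* ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    # Interfaces bound to this PCI function live under <pci>/net/.
    for net in "$pci"/net/*; do
        [[ -e $net ]] && net_devs+=("${net##*/}")
    done
done
echo "net devices: ${net_devs[*]}"   # e.g. cvl_0_0 cvl_0_1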
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:24.340 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:24.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:24.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:30:24.600 00:30:24.600 --- 10.0.0.2 ping statistics --- 00:30:24.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.600 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:24.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:24.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:30:24.600 00:30:24.600 --- 10.0.0.1 ping statistics --- 00:30:24.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:24.600 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:24.600 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2929435 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2929435 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:24.601 09:47:59 
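Editorial note: nvmf_tcp_init, traced above, moves one port of the NIC into a private network namespace so that target and initiator can exchange real TCP traffic on a single machine, then proves connectivity in both directions before the target starts. The commands below are condensed from the trace itself; only the ordering comments are added:

# Namespace wiring as traced above (cvl_0_0 = target port, cvl_0_1 = initiator).
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
# ipts(): open the NVMe/TCP port with an SPDK-tagged rule so that teardown
# can later remove it by grepping for the SPDK_NVMF comment.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
# Sanity-check both directions before launching nvmf_tgt.
ping -c 1 10.0.0.2
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1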
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2929435 ']' 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.601 09:47:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:24.601 [2024-12-09 09:48:00.006468] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:24.601 [2024-12-09 09:48:00.006541] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.862 [2024-12-09 09:48:00.096837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:24.862 [2024-12-09 09:48:00.116019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.862 [2024-12-09 09:48:00.116055] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.862 [2024-12-09 09:48:00.116061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.862 [2024-12-09 09:48:00.116066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.862 [2024-12-09 09:48:00.116070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
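Editorial note: waitforlisten, whose trace appears above (rpc_addr=/var/tmp/spdk.sock, max_retries=100), blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket; without it the test would race the target's DPDK initialization. A plausible sketch under those parameters; the exact probe command is an assumption, rpc_get_methods is simply one RPC that succeeds as soon as the server is up:

# Sketch of waiting for the target's RPC socket (probe method assumed).
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        # Give up immediately if the process died during startup.
        kill -0 "$pid" 2> /dev/null || return 1
        # Any successful RPC means the server is accepting connections.
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}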
00:30:24.862 [2024-12-09 09:48:00.117344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:24.862 [2024-12-09 09:48:00.117504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:24.862 [2024-12-09 09:48:00.117669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.862 [2024-12-09 09:48:00.117670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.433 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.434 [2024-12-09 09:48:00.855995] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.434 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.695 09:48:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.695 Malloc1 00:30:25.695 [2024-12-09 09:48:00.977530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.695 Malloc2 00:30:25.695 Malloc3 00:30:25.695 Malloc4 00:30:25.695 Malloc5 00:30:25.695 Malloc6 00:30:25.957 Malloc7 00:30:25.957 Malloc8 00:30:25.957 Malloc9 00:30:25.957 Malloc10 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2929890 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2929890 /var/tmp/bdevperf.sock 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2929890 ']' 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:25.957 09:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:25.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.957 { 00:30:25.957 "params": { 00:30:25.957 "name": "Nvme$subsystem", 00:30:25.957 "trtype": "$TEST_TRANSPORT", 00:30:25.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.957 "adrfam": "ipv4", 00:30:25.957 "trsvcid": "$NVMF_PORT", 00:30:25.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.957 "hdgst": ${hdgst:-false}, 00:30:25.957 "ddgst": ${ddgst:-false} 00:30:25.957 }, 00:30:25.957 "method": "bdev_nvme_attach_controller" 00:30:25.957 } 00:30:25.957 EOF 00:30:25.957 )") 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.957 { 00:30:25.957 "params": { 00:30:25.957 "name": "Nvme$subsystem", 00:30:25.957 "trtype": "$TEST_TRANSPORT", 00:30:25.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.957 "adrfam": "ipv4", 00:30:25.957 "trsvcid": "$NVMF_PORT", 00:30:25.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.957 "hdgst": ${hdgst:-false}, 00:30:25.957 "ddgst": ${ddgst:-false} 00:30:25.957 }, 00:30:25.957 "method": "bdev_nvme_attach_controller" 00:30:25.957 } 00:30:25.957 EOF 00:30:25.957 )") 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.957 { 00:30:25.957 "params": { 00:30:25.957 
"name": "Nvme$subsystem", 00:30:25.957 "trtype": "$TEST_TRANSPORT", 00:30:25.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.957 "adrfam": "ipv4", 00:30:25.957 "trsvcid": "$NVMF_PORT", 00:30:25.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.957 "hdgst": ${hdgst:-false}, 00:30:25.957 "ddgst": ${ddgst:-false} 00:30:25.957 }, 00:30:25.957 "method": "bdev_nvme_attach_controller" 00:30:25.957 } 00:30:25.957 EOF 00:30:25.957 )") 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.957 { 00:30:25.957 "params": { 00:30:25.957 "name": "Nvme$subsystem", 00:30:25.957 "trtype": "$TEST_TRANSPORT", 00:30:25.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.957 "adrfam": "ipv4", 00:30:25.957 "trsvcid": "$NVMF_PORT", 00:30:25.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.957 "hdgst": ${hdgst:-false}, 00:30:25.957 "ddgst": ${ddgst:-false} 00:30:25.957 }, 00:30:25.957 "method": "bdev_nvme_attach_controller" 00:30:25.957 } 00:30:25.957 EOF 00:30:25.957 )") 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:25.957 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:25.957 { 00:30:25.957 "params": { 00:30:25.957 "name": "Nvme$subsystem", 00:30:25.957 "trtype": "$TEST_TRANSPORT", 00:30:25.957 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:25.957 "adrfam": "ipv4", 00:30:25.957 "trsvcid": "$NVMF_PORT", 00:30:25.957 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:25.957 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:25.957 "hdgst": ${hdgst:-false}, 00:30:25.957 "ddgst": ${ddgst:-false} 00:30:25.957 }, 00:30:25.957 "method": "bdev_nvme_attach_controller" 00:30:25.957 } 00:30:25.957 EOF 00:30:25.957 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.218 { 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme$subsystem", 00:30:26.218 "trtype": "$TEST_TRANSPORT", 00:30:26.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "$NVMF_PORT", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.218 "hdgst": ${hdgst:-false}, 00:30:26.218 "ddgst": ${ddgst:-false} 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 } 00:30:26.218 EOF 00:30:26.218 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 [2024-12-09 09:48:01.419691] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:30:26.218 [2024-12-09 09:48:01.419746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929890 ] 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.218 { 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme$subsystem", 00:30:26.218 "trtype": "$TEST_TRANSPORT", 00:30:26.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "$NVMF_PORT", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.218 "hdgst": ${hdgst:-false}, 00:30:26.218 "ddgst": ${ddgst:-false} 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 } 00:30:26.218 EOF 00:30:26.218 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.218 { 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme$subsystem", 00:30:26.218 "trtype": "$TEST_TRANSPORT", 00:30:26.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "$NVMF_PORT", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.218 "hdgst": ${hdgst:-false}, 00:30:26.218 "ddgst": ${ddgst:-false} 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 } 00:30:26.218 EOF 00:30:26.218 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.218 { 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme$subsystem", 00:30:26.218 "trtype": "$TEST_TRANSPORT", 00:30:26.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "$NVMF_PORT", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.218 "hdgst": ${hdgst:-false}, 00:30:26.218 "ddgst": ${ddgst:-false} 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 } 00:30:26.218 EOF 00:30:26.218 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:26.218 { 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme$subsystem", 00:30:26.218 "trtype": "$TEST_TRANSPORT", 00:30:26.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:26.218 
"adrfam": "ipv4", 00:30:26.218 "trsvcid": "$NVMF_PORT", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:26.218 "hdgst": ${hdgst:-false}, 00:30:26.218 "ddgst": ${ddgst:-false} 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 } 00:30:26.218 EOF 00:30:26.218 )") 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:30:26.218 09:48:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme1", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme2", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme3", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme4", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme5", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme6", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme7", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 
00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme8", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme9", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:30:26.218 "hdgst": false, 00:30:26.218 "ddgst": false 00:30:26.218 }, 00:30:26.218 "method": "bdev_nvme_attach_controller" 00:30:26.218 },{ 00:30:26.218 "params": { 00:30:26.218 "name": "Nvme10", 00:30:26.218 "trtype": "tcp", 00:30:26.218 "traddr": "10.0.0.2", 00:30:26.218 "adrfam": "ipv4", 00:30:26.218 "trsvcid": "4420", 00:30:26.218 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:30:26.218 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:30:26.218 "hdgst": false, 00:30:26.219 "ddgst": false 00:30:26.219 }, 00:30:26.219 "method": "bdev_nvme_attach_controller" 00:30:26.219 }' 00:30:26.219 [2024-12-09 09:48:01.510218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.219 [2024-12-09 09:48:01.528573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.603 Running I/O for 10 seconds... 
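Editorial note: gen_nvmf_target_json, whose expansion fills the trace above, builds one bdev_nvme_attach_controller stanza per requested subsystem with an unquoted heredoc (xtrace prints the source before expansion, which is why $TEST_TRANSPORT and friends appear literal in the trace but resolved in the final printf), joins the stanzas with commas via IFS, and lets jq validate the document before bdevperf reads it over /dev/fd/63. A condensed sketch; the per-stanza heredoc matches the trace, the outer "subsystems" wrapper is paraphrased:

# Condensed from the gen_nvmf_target_json expansion traced above.
gen_nvmf_target_json_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # Unquoted heredoc: $subsystem and the NVMF_* variables expand
        # as each stanza is built.
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # "${config[*]}" joins the stanzas with the first IFS character (a comma),
    # yielding a valid JSON array body that jq validates and pretty-prints.
    local IFS=,
    jq . << JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}
# Fed to bdevperf as: --json <(gen_nvmf_target_json_sketch 1 2 3 4 5 6 7 8 9 10)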
00:30:27.603 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.603 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:30:27.603 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:27.603 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.603 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:30:27.864 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.125 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.387 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2929435 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2929435 ']' 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2929435 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929435 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:28.664 09:48:03 
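Editorial note: the iterations above (read_io_count=3, then 67, both below the 100-read threshold, then 131, which breaks the loop) are target/shutdown.sh's waitforio gate, the same loop tc2 ran earlier: the test only kills the target once bdevperf has demonstrably completed I/O, so shutdown happens under load. Reconstructed from the shutdown.sh@58-@70 xtrace:

# waitforio, reconstructed from the target/shutdown.sh@58-@70 xtrace above.
waitforio() {
    local rpc_addr=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_addr" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # Only proceed once at least 100 reads have completed, so the
        # target is torn down while I/O is genuinely in flight.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}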
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929435'
00:30:28.664 killing process with pid 2929435
00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2929435
00:30:28.664 09:48:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2929435
00:30:28.664 [2024-12-09 09:48:03.950161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x642410 is same with the state(6) to be set
00:30:28.664 [2024-12-09 09:48:03.950644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.664 [2024-12-09 09:48:03.950682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.664 [2024-12-09 09:48:03.950692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.664 [2024-12-09 09:48:03.950700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.664 [2024-12-09 09:48:03.950708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.664 [2024-12-09 09:48:03.950716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.664 [2024-12-09 09:48:03.950724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.664 [2024-12-09 09:48:03.950732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.664 [2024-12-09 09:48:03.950745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986440 is same with the state(6) to be set
00:30:28.665 [2024-12-09 09:48:03.951230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644d00 is same with the state(6) to be set
00:30:28.665 [2024-12-09 09:48:03.951486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.665 [2024-12-09 09:48:03.951582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.665 [2024-12-09 09:48:03.951591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.666 [2024-12-09 09:48:03.951842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.666 [2024-12-09 09:48:03.951850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.951989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.951998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.952399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.667 [2024-12-09 09:48:03.952424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:28.667 [2024-12-09 09:48:03.953850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x642db0 is same with the state(6) to be set
00:30:28.667 [2024-12-09 09:48:03.955040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.667 [2024-12-09 09:48:03.955053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643770 is same with the state(6) to be set
00:30:28.667 [2024-12-09 09:48:03.955063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.668 [2024-12-09 09:48:03.955337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.668 [2024-12-09 09:48:03.955346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.669 [2024-12-09 09:48:03.955523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.669 [2024-12-09 09:48:03.955533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1
lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.669 [2024-12-09 09:48:03.955762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.669 [2024-12-09 09:48:03.955769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.955983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.955991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.670 [2024-12-09 09:48:03.956061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 
[2024-12-09 09:48:03.956336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.670 [2024-12-09 09:48:03.956410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.670 [2024-12-09 09:48:03.956414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.670 [2024-12-09 09:48:03.956416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:28.671 [2024-12-09 09:48:03.956427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 
09:48:03.956504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956623] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.671 [2024-12-09 09:48:03.956687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.671 [2024-12-09 09:48:03.956690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.671 [2024-12-09 09:48:03.956693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.672 [2024-12-09 09:48:03.956703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.672 [2024-12-09 09:48:03.956709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.672 [2024-12-09 09:48:03.956721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643af0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.956727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.672 [2024-12-09 09:48:03.956737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.672 [2024-12-09 09:48:03.956747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.672 [2024-12-09 09:48:03.956757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.672 [2024-12-09 09:48:03.956765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.672 [2024-12-09 09:48:03.956775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.672 [2024-12-09 09:48:03.956782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.672 [2024-12-09 09:48:03.957622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957735] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957831] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957878] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 
00:30:28.672 [2024-12-09 09:48:03.957937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.672 [2024-12-09 09:48:03.957942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.957947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x643fc0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is 
same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.958991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.959960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960231] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.960801] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6444b0 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.673 [2024-12-09 09:48:03.961409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 00:30:28.674 [2024-12-09 09:48:03.961877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set 
00:30:28.674 [2024-12-09 09:48:03.972195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.674 [2024-12-09 09:48:03.972224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[39 further command/completion pairs, 09:48:03.972235-09:48:03.972921: WRITE cid:54-63 (lba 31488-32640) and READ cid:0-28 (lba 24576-28160), each len:128, all ABORTED - SQ DELETION (00/08)]
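Note: every aborted command above and below carries status (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base spec defines as Command Aborted due to SQ Deletion. That is the expected fate of I/O still queued when submission queues are deleted for a controller reset. A sketch of how the (sct/sc) pair unpacks from completion dword 3 (bit layout per the NVMe base spec; print_status is an illustrative helper, not an SPDK API):

    #include <stdint.h>
    #include <stdio.h>

    static void print_status(uint32_t cqe_dw3)
    {
        uint16_t status = (uint16_t)(cqe_dw3 >> 17); /* 15-bit status field, phase bit dropped */
        uint8_t  sc     = status & 0xff;             /* status code: 0x08 = aborted, SQ deletion */
        uint8_t  sct    = (status >> 8) & 0x7;       /* status code type: 0x0 = generic */
        unsigned dnr    = (status >> 14) & 0x1;      /* do-not-retry bit */

        printf("(%02x/%02x) dnr:%u\n", sct, sc, dnr);
    }

    int main(void)
    {
        /* Status field for SCT 0x0 / SC 0x08 / DNR 0, as in the log lines above. */
        print_status((uint32_t)0x08 << 17);
        return 0;
    }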
00:30:28.675 [2024-12-09 09:48:03.973049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.675 [2024-12-09 09:48:03.973063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[63 further command/completion pairs, 09:48:03.973075-09:48:03.974149: WRITE cid:32-63 (lba 28672-32640) and READ cid:0-30 (lba 24576-28416), each len:128, all ABORTED - SQ DELETION (00/08)]
00:30:28.676 [2024-12-09 09:48:03.974432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:28.676 [2024-12-09 09:48:03.974478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986440 (9): Bad file descriptor
00:30:28.676 [2024-12-09 09:48:03.974525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:28.676 [2024-12-09 09:48:03.974536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[09:48:03.974545-09:48:03.974584: ASYNC EVENT REQUEST (0c) qid:0 cid:1-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.676 [2024-12-09 09:48:03.974592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb1d50 is same with the state(6) to be set
[09:48:03.974611-09:48:03.974674: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.677 [2024-12-09 09:48:03.974681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x983fa0 is same with the state(6) to be set
[09:48:03.974707-09:48:03.974766: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.677 [2024-12-09 09:48:03.974772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda6e40 is same with the state(6) to be set
[09:48:03.974797-09:48:03.974852: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.677 [2024-12-09 09:48:03.974863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894610 is same with the state(6) to be set
[09:48:03.974897-09:48:03.974952: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.677 [2024-12-09 09:48:03.974959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2640 is same with the state(6) to be set
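Note: ASYNC EVENT REQUEST (opcode 0c) commands are meant to stay outstanding until the controller has an event to report, so a batch of them is aborted with SQ DELETION every time an admin queue is torn down for a reset; the notices above are a side effect of the resets, not independent failures. A small decoder for the admin opcode byte (opcode values per the NVMe base spec; the helper is illustrative, not an SPDK API):

    #include <stdint.h>
    #include <stdio.h>

    /* A few admin opcodes from the NVMe base spec, including the one seen here. */
    static const char *admin_opc_name(uint8_t opc)
    {
        switch (opc) {
        case 0x02: return "GET LOG PAGE";
        case 0x06: return "IDENTIFY";
        case 0x08: return "ABORT";
        case 0x09: return "SET FEATURES";
        case 0x0a: return "GET FEATURES";
        case 0x0c: return "ASYNC EVENT REQUEST";
        default:   return "UNKNOWN";
        }
    }

    int main(void)
    {
        printf("(%02x) = %s\n", 0x0c, admin_opc_name(0x0c));
        return 0;
    }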
00:30:28.677 [2024-12-09 09:48:03.979195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x644830 is same with the state(6) to be set
[last message repeated 36 times for tqpair=0x644830, 09:48:03.979216-09:48:03.979390]
[09:48:03.985064-09:48:03.985159: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.678 [2024-12-09 09:48:03.985169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5ca0 is same with the state(6) to be set
[09:48:03.985208-09:48:03.985279: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.678 [2024-12-09 09:48:03.985288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97a430 is same with the state(6) to be set
[09:48:03.985313-09:48:03.985388: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.678 [2024-12-09 09:48:03.985397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9855a0 is same with the state(6) to be set
[09:48:03.985421-09:48:03.985490: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3, each ABORTED - SQ DELETION (00/08)]
00:30:28.678 [2024-12-09 09:48:03.985499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdff6a0 is same with the state(6) to be set
00:30:28.678 [2024-12-09 09:48:03.990851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:30:28.678 [2024-12-09 09:48:03.990900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:30:28.678 [2024-12-09 09:48:03.990921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
[09:48:03.990950-09:48:03.991205: nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpairs 0xdb1d50, 0x9855a0, 0x97a430, 0x983fa0, 0xda6e40, 0x894610, 0xde2640, 0xde5ca0, 0xdff6a0]
00:30:28.678 [2024-12-09 09:48:03.992416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.678 [2024-12-09 09:48:03.992452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986440 with addr=10.0.0.2, port=4420
00:30:28.678 [2024-12-09 09:48:03.992467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986440 is same with the state(6) to be set
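Note: errno 111 on Linux is ECONNREFUSED: by this point the listener behind 10.0.0.2:4420 is gone, so every reconnect attempt is refused. A standalone reproduction of the posix.c message (plain POSIX sockets, no SPDK involved):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener on the port this prints errno = 111 (ECONNREFUSED). */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }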
00:30:28.678 [2024-12-09 09:48:03.992416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:28.678 [2024-12-09 09:48:03.992452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986440 with addr=10.0.0.2, port=4420
00:30:28.678 [2024-12-09 09:48:03.992467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986440 is same with the state(6) to be set
00:30:28.678 [2024-12-09 09:48:03.994142] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:28.678 [... the same connect() failed / sock connection error / recv state triple repeats for tqpair=0x97a430, 0x9855a0 and 0xdb1d50 (timestamps 09:48:03.994907-03.995576) ...]
00:30:28.678 [2024-12-09 09:48:03.995595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986440 (9): Bad file descriptor
00:30:28.678 [... four more "Unexpected PDU type 0x00" errors (09:48:03.995752-03.995932) and flush failures for tqpair=0x97a430, 0x9855a0 and 0xdb1d50 (09:48:03.996032-03.996072) follow ...]
00:30:28.678 [2024-12-09 09:48:03.996086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:28.678 [2024-12-09 09:48:03.996099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:28.678 [2024-12-09 09:48:03.996113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:28.678 [2024-12-09 09:48:03.996127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
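Every aborted command in this log is completed with "ABORTED - SQ DELETION (00/08)". The pair in parentheses is the NVMe (status code type/status code): SCT 0x0 is the generic command status set, and SC 0x08 within it is "Command Aborted due to SQ Deletion", the status a controller returns for commands still outstanding on a submission queue being deleted. A minimal unpacking sketch, assuming the standard NVMe completion status layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9) rather than any SPDK-internal type:

    /* Sketch (not SPDK code): split a raw 16-bit NVMe completion status word
     * into status code type (SCT) and status code (SC), matching the
     * "(00/08)" notation printed by spdk_nvme_print_completion. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint16_t raw = (0x08u << 1) | (0x0u << 9); /* SC=0x08, SCT=0x0 */
        unsigned sc  = (raw >> 1) & 0xffu;         /* status code      */
        unsigned sct = (raw >> 9) & 0x7u;          /* status code type */
        printf("(%02x/%02x)%s\n", sct, sc,
               (sct == 0 && sc == 0x08) ? " = ABORTED - SQ DELETION" : "");
        return 0;
    }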
00:30:28.678 [2024-12-09 09:48:03.996290] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:30:28.678 [2024-12-09 09:48:03.996323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:30:28.678 [2024-12-09 09:48:03.996336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:30:28.678 [2024-12-09 09:48:03.996355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:30:28.678 [2024-12-09 09:48:03.996368] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:30:28.678 [... the same four-line failure sequence repeats for nqn.2016-06.io.spdk:cnode3 (09:48:03.996381-03.996425) and nqn.2016-06.io.spdk:cnode4 (09:48:03.996438-03.996472) ...]
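The four-line sequence repeated per subsystem above is the visible shape of a failed reset: once reconnect polling gives up, the controller is reported in the error state, marked failed, and the reset completes with an error. An illustrative sketch only, with hypothetical names (this is not the bdev_nvme state machine):

    /* Illustrative sketch (hypothetical names, not SPDK internals): the
     * per-controller escalation visible in the four log lines above. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ctrlr_state { CTRLR_RESETTING, CTRLR_ERROR, CTRLR_FAILED };

    static void reset_ctrlr(const char *nqn, bool reconnect_ok)
    {
        enum ctrlr_state st = CTRLR_RESETTING;
        if (!reconnect_ok) {
            st = CTRLR_ERROR;   /* "Ctrlr is in error state" */
            printf("[%s] controller reinitialization failed\n", nqn);
            st = CTRLR_FAILED;  /* "in failed state." */
            printf("[%s] Resetting controller failed.\n", nqn);
        }
        (void)st;
    }

    int main(void)
    {
        /* All subsystems hit the same path once their TCP reconnects are refused. */
        reset_ctrlr("nqn.2016-06.io.spdk:cnode2", false);
        reset_ctrlr("nqn.2016-06.io.spdk:cnode3", false);
        reset_ctrlr("nqn.2016-06.io.spdk:cnode4", false);
        return 0;
    }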
00:30:28.679 [2024-12-09 09:48:04.001069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.679 [2024-12-09 09:48:04.001090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.679 [... the same READ / ABORTED - SQ DELETION pair repeats for cid:1-63, lba:16512-24448 (stride 128), timestamps 09:48:04.001111-04.002553 ...]
00:30:28.680 [2024-12-09 09:48:04.002564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88bf0 is same with the state(6) to be set
00:30:28.680 [2024-12-09 09:48:04.004279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.680 [2024-12-09 09:48:04.004299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.680 [... a second, identical 64-command batch is aborted the same way: cid:1-63, lba:16512-24448, timestamps 09:48:04.004314-04.005776 ...]
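The aborted READs come in fixed-stride batches: within a batch, lba = 16384 + cid * 128 with len:128, so cid 0-63 spans lba 16384-24448, and the batch that begins below at lba 24576 picks up exactly where the previous one ended. A quick standalone check of that arithmetic:

    /* Sketch: verify the cid -> lba stride seen in the aborted READ batches. */
    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        const unsigned base = 16384, len = 128;
        for (unsigned cid = 0; cid < 64; cid++) {
            unsigned lba = base + cid * len;
            if (cid == 0 || cid == 63)
                printf("cid:%-2u lba:%u len:%u\n", cid, lba, len);
        }
        assert(base + 63 * len == 24448); /* last lba of each 64-command batch */
        assert(base + 64 * len == 24576); /* first lba of the following batch  */
        return 0;
    }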
00:30:28.682 [2024-12-09 09:48:04.005787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89eb0 is same with the state(6) to be set
00:30:28.682 [2024-12-09 09:48:04.007478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.682 [2024-12-09 09:48:04.007497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.682 [... a third batch continues the pattern from lba:24576: the same READ / ABORTED - SQ DELETION pair repeats for cid:1-40, lba:24704-29696, timestamps 09:48:04.007511-04.008419 ...]
00:30:28.683 [2024-12-09 09:48:04.008431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008441] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.683 [2024-12-09 09:48:04.008803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.683 [2024-12-09 09:48:04.008813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008904] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.008949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.008960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8b1c0 is same with the state(6) to be set 00:30:28.684 [2024-12-09 09:48:04.010299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.684 [2024-12-09 09:48:04.010436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.684 [2024-12-09 09:48:04.010445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
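Every completion in these bursts carries the status pair (00/08), which reads as (SCT/SC): status code type 00h (generic command status) with status code 08h, defined in the NVMe base specification as Command Aborted due to SQ Deletion, i.e. the queued reads were cancelled because their submission queue was torn down when the TCP qpair disconnected. A minimal, self-contained sketch of how such a 16-bit completion status word decodes (an illustrative helper, not SPDK's actual print routine):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe CQE status field as printed above in "(SCT/SC)" form.
 * Taking CQE dword 3 bits 31:16 as one 16-bit word: bit 0 is the phase
 * tag (p), bits 8:1 the status code (SC), bits 11:9 the status code
 * type (SCT), bit 14 more (m), bit 15 do-not-retry (dnr) -- the same
 * p/m/dnr flags shown in each log line. */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    if (sct == 0x0 && sc == 0x08)
        printf("-> ABORTED - SQ DELETION (generic status 08h)\n");
}

int main(void)
{
    decode_status(0x08 << 1); /* the (00/08) status seen throughout this log */
    return 0;
}
```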
00:30:28.684 [2024-12-09 09:48:04.010299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.684 [2024-12-09 09:48:04.010313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ/ABORTED - SQ DELETION pairs elided (cid:1 through cid:62, lba:24704 through lba:32512, len:128 each) ...]
00:30:28.685 [2024-12-09 09:48:04.011445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.685 [2024-12-09 09:48:04.011453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.685 [2024-12-09 09:48:04.011461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8c4d0 is same with the state(6) to be set
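The aborted reads in each burst are strictly sequential: with len:128 blocks per command, the starting block follows lba = base + 128 * cid (base 24576 for the two qpairs above, 16384 for the bursts that follow, so cid:63 lands on lba:32640 and lba:24448 respectively). A small sanity-check sketch under that assumption, with bases taken from this log (illustrative, not part of the test code):

```c
#include <stdint.h>
#include <stdio.h>

/* Reproduce the LBA progression of the aborted READs: 64 outstanding
 * 128-block reads per qpair, queued back to back, so each command id
 * maps to lba = base + 128 * cid. */
int main(void)
{
    const uint64_t bases[] = { 24576, 16384 };

    for (size_t b = 0; b < sizeof(bases) / sizeof(bases[0]); b++)
        for (uint32_t cid = 0; cid < 64; cid++)
            printf("cid:%u lba:%llu len:128\n", cid,
                   (unsigned long long)(bases[b] + 128ULL * cid));
    return 0;
}
```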
00:30:28.685 [2024-12-09 09:48:04.012732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.685 [2024-12-09 09:48:04.012746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 62 more identical READ/ABORTED - SQ DELETION pairs elided (cid:1 through cid:62, lba:16512 through lba:24320, len:128 each) ...]
00:30:28.687 [2024-12-09 09:48:04.013850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.687 [2024-12-09 09:48:04.013859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:28.687 [2024-12-09 09:48:04.013868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8d7c0 is same with the state(6) to be set
00:30:28.687 [2024-12-09 09:48:04.015145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.687 [2024-12-09 09:48:04.015158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 17 more identical READ/ABORTED - SQ DELETION pairs elided (cid:1 through cid:17, lba:16512 through lba:18560, len:128 each) ...]
00:30:28.688 [2024-12-09 09:48:04.015468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:28.688 [2024-12-09 09:48:04.015475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.015982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.015989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:28.688 [2024-12-09 09:48:04.015999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.688 [2024-12-09 09:48:04.016101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.688 [2024-12-09 09:48:04.016111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 
09:48:04.016178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:28.689 [2024-12-09 09:48:04.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:28.689 [2024-12-09 09:48:04.016263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc75d0 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.017781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.017808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.017823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.017837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.017927] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:30:28.689 [2024-12-09 09:48:04.017941] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
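The long runs of ABORTED - SQ DELETION (00/08) notices above are the expected signature of this shutdown test: when an I/O submission queue is torn down, every READ still queued on it completes with status code type 0x0 (generic) and status code 0x8 (Command Aborted due to SQ Deletion), and the driver prints one command/completion pair per outstanding request (cid 0 through 63 at queue depth 64, once per affected qpair). A minimal sketch of how that (00/08) pair maps onto SPDK's completion structure; the helper name is illustrative, the constants come from spdk/nvme_spec.h:

    #include "spdk/nvme_spec.h"
    #include <stdbool.h>

    /* Return true when a completion carries the (00/08) status printed
     * above: generic status code type, Command Aborted due to SQ Deletion. */
    static bool
    completion_is_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&         /* 00 */
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION; /* 08 */
    }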
00:30:28.689 [2024-12-09 09:48:04.018014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:30:28.689 task offset: 24576 on job bdev=Nvme1n1 fails
00:30:28.689
00:30:28.689 Latency(us)
[2024-12-09T08:48:04.142Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:28.689 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme1n1 ended in about 0.93 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme1n1 : 0.93 207.03 12.94 69.01 0.00 229151.84 4942.51 246415.36
00:30:28.689 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme2n1 ended in about 0.96 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme2n1 : 0.96 200.17 12.51 66.72 0.00 232207.36 18350.08 249910.61
00:30:28.689 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme3n1 ended in about 0.96 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme3n1 : 0.96 199.87 12.49 66.62 0.00 227712.00 34078.72 221074.77
00:30:28.689 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme4n1 ended in about 0.96 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme4n1 : 0.96 199.55 12.47 66.52 0.00 223275.09 21080.75 242920.11
00:30:28.689 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme5n1 ended in about 0.98 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme5n1 : 0.98 131.12 8.19 65.56 0.00 295932.30 16930.13 256901.12
00:30:28.689 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme6n1 ended in about 0.98 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme6n1 : 0.98 130.69 8.17 65.35 0.00 290540.94 20534.61 284863.15
00:30:28.689 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme7n1 ended in about 0.98 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme7n1 : 0.98 195.41 12.21 65.14 0.00 213789.87 15947.09 244667.73
00:30:28.689 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme8n1 ended in about 0.98 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme8n1 : 0.98 194.93 12.18 64.98 0.00 209557.33 14854.83 239424.85
00:30:28.689 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme9n1 ended in about 0.99 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme9n1 : 0.99 129.64 8.10 64.82 0.00 273800.53 28180.48 248162.99
00:30:28.689 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:30:28.689 Job: Nvme10n1 ended in about 0.99 seconds with error
00:30:28.689 Verification LBA range: start 0x0 length 0x400
00:30:28.689 Nvme10n1 : 0.99 129.33 8.08 64.66 0.00 268310.76 17367.04 265639.25
[2024-12-09T08:48:04.142Z] ===================================================================================================================
[2024-12-09T08:48:04.142Z] Total : 1717.74 107.36 659.37 0.00 242459.10 4942.51 284863.15
00:30:28.689
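A quick cross-check of the table above: bdevperf's MiB/s column is derived from the IOPS column and the fixed IO size (65536 bytes for every job here), i.e. MiB/s = IOPS * io_size / 2^20. A tiny standalone check using values copied from the Nvme1n1 row:

    #include <stdio.h>

    int main(void)
    {
            double iops = 207.03;     /* Nvme1n1 IOPS column above */
            double io_size = 65536.0; /* "IO size: 65536" from the job header */

            /* 207.03 * 65536 / 1048576 = 207.03 / 16 = 12.94, matching
             * the MiB/s column for Nvme1n1. */
            printf("%.2f MiB/s\n", iops * io_size / (1024.0 * 1024.0));
            return 0;
    }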
[2024-12-09 09:48:04.043091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:28.689 [2024-12-09 09:48:04.043140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.043626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.043653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x983fa0 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.043664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x983fa0 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.043864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.043874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda6e40 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.043882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda6e40 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.044212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.044222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x894610 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.044229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x894610 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.044416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.044426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde5ca0 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.044434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde5ca0 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.046030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.046045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.046055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.046064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:30:28.689 [2024-12-09 09:48:04.046397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.046411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdff6a0 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.046419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdff6a0 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.046905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.689 [2024-12-09 09:48:04.046950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xde2640 with addr=10.0.0.2, port=4420 00:30:28.689 [2024-12-09 09:48:04.046963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xde2640 is same with the state(6) to be set 00:30:28.689 [2024-12-09 09:48:04.046981] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x983fa0 (9): Bad file descriptor 00:30:28.689 [2024-12-09 09:48:04.046992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda6e40 (9): Bad file descriptor 00:30:28.689 [2024-12-09 09:48:04.047002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x894610 (9): Bad file descriptor 00:30:28.689 [2024-12-09 09:48:04.047012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde5ca0 (9): Bad file descriptor 00:30:28.689 [2024-12-09 09:48:04.047052] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:30:28.689 [2024-12-09 09:48:04.047065] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:30:28.689 [2024-12-09 09:48:04.047079] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:30:28.689 [2024-12-09 09:48:04.047091] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:30:28.689 [2024-12-09 09:48:04.047677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.690 [2024-12-09 09:48:04.047705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x986440 with addr=10.0.0.2, port=4420 00:30:28.690 [2024-12-09 09:48:04.047714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x986440 is same with the state(6) to be set 00:30:28.690 [2024-12-09 09:48:04.047939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.690 [2024-12-09 09:48:04.047949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdb1d50 with addr=10.0.0.2, port=4420 00:30:28.690 [2024-12-09 09:48:04.047957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdb1d50 is same with the state(6) to be set 00:30:28.690 [2024-12-09 09:48:04.048259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.690 [2024-12-09 09:48:04.048270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9855a0 with addr=10.0.0.2, port=4420 00:30:28.690 [2024-12-09 09:48:04.048277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9855a0 is same with the state(6) to be set 00:30:28.690 [2024-12-09 09:48:04.048338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.690 [2024-12-09 09:48:04.048352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x97a430 with addr=10.0.0.2, port=4420 00:30:28.690 [2024-12-09 09:48:04.048359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97a430 is same with the state(6) to be set 00:30:28.690 [2024-12-09 09:48:04.048369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdff6a0 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xde2640 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048414] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048502] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x986440 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb1d50 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9855a0 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x97a430 (9): Bad file descriptor 00:30:28.690 [2024-12-09 09:48:04.048624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:30:28.690 [2024-12-09 09:48:04.048649] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048676] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048725] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:30:28.690 [2024-12-09 09:48:04.048790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:30:28.690 [2024-12-09 09:48:04.048796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:30:28.690 [2024-12-09 09:48:04.048803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:30:28.690 [2024-12-09 09:48:04.048810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
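The repeated disconnect -> "controller reinitialization failed" -> "Resetting controller failed" sequences above are bdev_nvme driving SPDK's asynchronous controller reset while the target side is already gone. A rough sketch of that polling pattern, assuming the public reconnect API declared in spdk/nvme.h (spdk_nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_async, spdk_nvme_ctrlr_reconnect_poll_async); real code re-polls from a poller instead of spinning in a loop:

    #include "spdk/nvme.h"
    #include <errno.h>

    /* Sketch of the reset path reflected in the log: tear the connection
     * down, start an async reconnect, and poll it to completion. If the
     * transport cannot connect (the connect() -> errno 111 errors above),
     * the poll eventually returns an error and the caller logs
     * "Resetting controller failed". */
    static int
    reset_ctrlr_sketch(struct spdk_nvme_ctrlr *ctrlr)
    {
            int rc = spdk_nvme_ctrlr_disconnect(ctrlr);

            if (rc != 0) {
                    return rc;
            }
            spdk_nvme_ctrlr_reconnect_async(ctrlr);
            do {
                    rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
            } while (rc == -EAGAIN);
            return rc; /* 0 on success, negative errno on failure */
    }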
00:30:28.951 09:48:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2929890 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2929890 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2929890 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:29.895 rmmod nvme_tcp 00:30:29.895 
rmmod nvme_fabrics 00:30:29.895 rmmod nvme_keyring 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2929435 ']' 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2929435 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2929435 ']' 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2929435 00:30:29.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2929435) - No such process 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2929435 is not found' 00:30:29.895 Process with pid 2929435 is not found 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:29.895 09:48:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:32.440 00:30:32.440 real 0m7.817s 00:30:32.440 user 0m19.332s 00:30:32.440 sys 0m1.226s 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:30:32.440 ************************************ 00:30:32.440 END TEST nvmf_shutdown_tc3 00:30:32.440 ************************************ 00:30:32.440 09:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:32.440 ************************************ 00:30:32.440 START TEST nvmf_shutdown_tc4 00:30:32.440 ************************************ 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:32.440 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:32.441 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:32.441 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.441 09:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:32.441 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:32.441 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:32.441 09:48:07 
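In plain terms, the device discovery traced above resolves each PCI address to its kernel interface name through sysfs. A minimal sketch of that lookup, with paths and names taken from the trace (common.sh@411, @427, @428) and nothing else assumed:

  for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev bound to this port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done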
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:32.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:32.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:30:32.441 00:30:32.441 --- 10.0.0.2 ping statistics --- 00:30:32.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.441 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:30:32.441 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:32.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:32.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:30:32.442 00:30:32.442 --- 10.0.0.1 ping statistics --- 00:30:32.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:32.442 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2931220 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2931220 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2931220 ']' 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
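The namespace plumbing traced in common.sh@271-291 above reduces to the following sketch: one physical port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, port 4420 is opened, and both directions are ping-verified. The iptables bookkeeping comment from the trace is dropped here for brevity:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator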
00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.442 09:48:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:32.705 [2024-12-09 09:48:07.950995] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:32.705 [2024-12-09 09:48:07.951074] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.705 [2024-12-09 09:48:08.048833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.705 [2024-12-09 09:48:08.071551] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:32.705 [2024-12-09 09:48:08.071592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.705 [2024-12-09 09:48:08.071598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:32.705 [2024-12-09 09:48:08.071604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:32.705 [2024-12-09 09:48:08.071608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.705 [2024-12-09 09:48:08.073268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.705 [2024-12-09 09:48:08.073324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.705 [2024-12-09 09:48:08.073482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.705 [2024-12-09 09:48:08.073483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:33.649 [2024-12-09 09:48:08.783148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:30:33.649 09:48:08 
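nvmfappstart above boils down to launching nvmf_tgt inside the target namespace and then creating the TCP transport over RPC once /var/tmp/spdk.sock is up. A rough equivalent, with all flags copied from the trace; rpc.py stands in for the harness's rpc_cmd wrapper, and the repeated "ip netns exec" prefix in the recorded command is collapsed to one:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # once the target is listening on /var/tmp/spdk.sock, create the transport:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192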
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.649 09:48:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:33.649 Malloc1 
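Each of the ten "cat" calls above appends one subsystem's worth of RPCs to rpcs.txt, which shutdown.sh@36 then replays in a single rpc_cmd session. The heredoc body itself is not echoed in the trace, so the Malloc bdev sizes and the NQN/serial patterns below are illustrative assumptions:

  rm -rf rpcs.txt
  for i in {1..10}; do
    {
      echo "bdev_malloc_create -b Malloc$i 64 512"
      echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
      echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
      echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
  done
  rpc_cmd < rpcs.txt   # one batched RPC session creates all ten subsystems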
00:30:33.649 [2024-12-09 09:48:08.890930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:33.649 Malloc2 00:30:33.649 Malloc3 00:30:33.649 Malloc4 00:30:33.649 Malloc5 00:30:33.649 Malloc6 00:30:33.649 Malloc7 00:30:33.911 Malloc8 00:30:33.911 Malloc9 00:30:33.911 Malloc10 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2931463 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:30:33.911 09:48:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:30:34.173 [2024-12-09 09:48:09.373483] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2931220 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2931220 ']' 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2931220 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2931220 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2931220' 00:30:39.545 killing process with pid 2931220 00:30:39.545 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2931220 00:30:39.545 [2024-12-09 09:48:14.370978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf420 is same with the state(6) to be set 00:30:39.545 
09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2931220 00:30:39.545
[2024-12-09 09:48:14.371023] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1baf420 is same with the state(6) to be set 00:30:39.545
[... the same recv-state error repeats for tqpair=0x1baf420, 0x1baf8f0, 0x1bafdc0, 0x1baef50 and 0x1bad890 ...]
Write completed with error (sct=0, sc=8) 00:30:39.545
starting I/O failed: -6 00:30:39.545
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6 repeats for every queued I/O on nqn.2016-06.io.spdk:cnode1 ...]
[2024-12-09 09:48:14.372088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:39.545
[2024-12-09 09:48:14.373922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.546
[2024-12-09 09:48:14.375694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.546
NVMe io qpair process completion error
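The failure flood above is the point of tc4: shutdown.sh@148-150 starts a 20-second spdk_nvme_perf run against the target, sleeps five seconds, and shutdown.sh@155 then kills the target underneath it. Each in-flight command completes with sct=0/sc=8 (generic status set, "command aborted due to SQ deletion") and new submissions fail with -6 (-ENXIO, the "No such device or address" in the CQ transport errors). A condensed sketch of that sequence, with every flag copied from the trace:

  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5            # let the workload ramp up
  kill "$nvmfpid"    # tear the target down mid-workload
  wait "$nvmfpid"    # tc4 waits for the target to exit before cleanup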
00:30:39.546 [2024-12-09 09:48:14.377444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0760 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.546 [2024-12-09 09:48:14.377952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.377957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.377962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.377967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0c30 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.378176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb1100 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.378578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0290 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.378597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0290 is same with the state(6) to be set 00:30:39.547 [2024-12-09 09:48:14.378602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb0290 is same with the state(6) to be set 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write 
completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 [2024-12-09 09:48:14.379795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.547 starting I/O failed: -6 00:30:39.547 starting I/O failed: -6 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 
00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 [2024-12-09 09:48:14.380739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed 
with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 starting I/O failed: -6 00:30:39.547 Write completed with error (sct=0, sc=8) 00:30:39.547 [2024-12-09 09:48:14.381645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 
00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 00:30:39.548 Write completed with error (sct=0, sc=8) 00:30:39.548 starting I/O failed: -6 
00:30:39.548 Write completed with error (sct=0, sc=8)
00:30:39.548 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:30:39.548 [2024-12-09 09:48:14.383039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:39.548 NVMe io qpair process completion error
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.548 [2024-12-09 09:48:14.384326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.549 [2024-12-09 09:48:14.385138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failure lines omitted ...]
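Editor's note: the flood of "Write completed with error (sct=0, sc=8)" lines comes from the test application's I/O completion callback. A minimal sketch of what such a callback looks like against the SPDK NVMe driver follows; the function name write_done and the message format are illustrative assumptions, not the actual test source.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical completion callback (illustrative, not the test's source).
     * The cpl argument carries the NVMe status pair: sct (status code type)
     * and sc (status code). In this log, sct=0/sc=8 is Generic Command
     * Status / Command Aborted due to SQ Deletion: the submission queue was
     * torn down while these writes were still in flight. */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (spdk_nvme_cpl_is_error(cpl)) {
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }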
00:30:39.549 [2024-12-09 09:48:14.386089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.550 [2024-12-09 09:48:14.388439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.550 NVMe io qpair process completion error
[... repeated write-error/I/O-failure lines omitted ...]
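Editor's note: the interleaved "starting I/O failed: -6" lines are submission-side failures: -6 is -ENXIO ("No such device or address"), the errno the driver returns once the qpair's TCP connection to the target is gone. A hedged sketch of such a submit path, reusing the write_done callback sketched above (start_write is an assumed name):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void write_done(void *ctx, const struct spdk_nvme_cpl *cpl); /* as sketched above */

    /* Queue one write; a non-zero return means the I/O never started.
     * The message format mirrors this log but is an assumption. */
    static int
    start_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                void *buf, uint64_t lba, uint32_t lba_count)
    {
            int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, lba_count,
                                            write_done, NULL, 0 /* io_flags */);
            if (rc != 0) {
                    printf("starting I/O failed: %d\n", rc); /* -6 == -ENXIO here */
            }
            return rc;
    }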
00:30:39.550 [2024-12-09 09:48:14.389774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.550 [2024-12-09 09:48:14.390596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.551 [2024-12-09 09:48:14.392062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.551 [2024-12-09 09:48:14.393982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:39.551 NVMe io qpair process completion error
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.551 [2024-12-09 09:48:14.395089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failure lines omitted ...]
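Editor's note: the bracketed "nvme_qpair.c: 812" *ERROR* lines are emitted from inside spdk_nvme_qpair_process_completions() when it finds the completion queue's transport dead; the call then returns a negative errno instead of a completion count. A minimal polling sketch (poll_qpair is an assumed name):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Drain completions; a negative return (e.g. -ENXIO, the -6 in this
     * log) means the transport itself failed, so the caller should stop
     * polling and reconnect or destroy the qpair. */
    static void
    poll_qpair(struct spdk_nvme_qpair *qpair)
    {
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
            if (rc < 0) {
                    fprintf(stderr, "CQ error on qpair: %d\n", (int)rc);
            }
    }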
00:30:39.552 [2024-12-09 09:48:14.395905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.552 [2024-12-09 09:48:14.396843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.553 [2024-12-09 09:48:14.398719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.553 NVMe io qpair process completion error
[... repeated write-error/I/O-failure lines omitted ...]
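Editor's note: to decode status pairs like (sct=0, sc=8) without consulting the spec, SPDK exposes spdk_nvme_cpl_get_status_string(); a small helper sketch follows (print_cpl_status is an assumed name). Per the NVMe base specification, SCT 0 is Generic Command Status and SC 0x08 there is Command Aborted due to SQ Deletion, which is consistent with the qpairs being torn down after the transport errors above.

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Render the raw (sct, sc) pair from a completion as readable text. */
    static void
    print_cpl_status(const struct spdk_nvme_cpl *cpl)
    {
            printf("status: %s (sct=%d, sc=%d)\n",
                   spdk_nvme_cpl_get_status_string(&cpl->status),
                   cpl->status.sct, cpl->status.sc);
    }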
00:30:39.553 [2024-12-09 09:48:14.399736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.553 [2024-12-09 09:48:14.400543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.554 [2024-12-09 09:48:14.401484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.554 [2024-12-09 09:48:14.403865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:39.554 NVMe io qpair process completion error
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.554 [2024-12-09 09:48:14.404994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failure lines omitted ...]
00:30:39.555 [2024-12-09 09:48:14.405866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failure lines continue ...]
starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 [2024-12-09 09:48:14.406801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, 
sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.555 starting I/O failed: -6 00:30:39.555 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 
00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 [2024-12-09 09:48:14.408864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.556 NVMe io qpair process completion error 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 [2024-12-09 09:48:14.409890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write 
completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 [2024-12-09 09:48:14.410724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 
00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 Write completed with error (sct=0, sc=8) 00:30:39.556 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 [2024-12-09 09:48:14.411687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.557 Write completed with error 
(sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error 
(sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 [2024-12-09 09:48:14.413708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.557 NVMe io qpair process completion error 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 
starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 [2024-12-09 09:48:14.414986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.557 starting I/O failed: -6 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.557 starting I/O failed: -6 00:30:39.557 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with 
error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 [2024-12-09 09:48:14.415841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 
00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 [2024-12-09 09:48:14.416777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O 
failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.558 Write completed with error (sct=0, sc=8) 00:30:39.558 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O 
failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 [2024-12-09 09:48:14.419263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:39.559 NVMe io qpair process completion error 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O 
failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with 
error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.559 starting I/O failed: -6 00:30:39.559 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error 
(sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 Write completed with error (sct=0, sc=8) 00:30:39.560 starting I/O failed: -6 00:30:39.560 [2024-12-09 09:48:14.423190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:39.560 NVMe io qpair process completion error 00:30:39.560 Initializing NVMe Controllers 00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:30:39.560 Controller IO queue size 128, less than required. 00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
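The "Write completed with error (sct=0, sc=8)" records above are emitted from the benchmark's I/O completion callback: sct=0 with sc=8 is the generic status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base specification, and -6 is -ENXIO, matching the "No such device or address" CQ transport error. A minimal sketch of such a callback against the public SPDK NVMe API (the name write_complete is illustrative, not taken from this test):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* I/O completion callback: SPDK invokes this from
     * spdk_nvme_qpair_process_completions() for each finished command. */
    static void
    write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0, sc=8: the write was aborted because its submission
             * queue (the qpair) was torn down -- the consequence of the
             * CQ transport error -6 logged above. */
            fprintf(stderr, "Write completed with error (sct=%d, sc=%d)\n",
                    cpl->status.sct, cpl->status.sc);
            return;
        }
        /* success path */
    }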
00:30:39.560 Initializing NVMe Controllers
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:39.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:30:39.560 Controller IO queue size 128, less than required.
00:30:39.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
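
The repeated "Controller IO queue size 128, less than required" notice means the benchmark requested a deeper queue than the fabrics controller advertises, so the excess requests wait in the host driver rather than on the wire. Capping the queue depth at the advertised size avoids the warning; a sketch of such an invocation (paths and values are illustrative, not the exact command this run used):

  ./build/bin/spdk_nvme_perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode3' \
      -q 128 -o 4096 -w write -t 10   # -q 128 matches the advertised IO queue size
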
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:30:39.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:30:39.560 Initialization complete. Launching workers.
00:30:39.560 ========================================================
00:30:39.560                                                                           Latency(us)
00:30:39.560 Device Information                                                      :     IOPS    MiB/s    Average        min        max
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1910.53    82.09   67014.20     630.34  118815.86
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1862.94    80.05   68748.61     632.67  149652.04
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1884.57    80.98   67999.08     635.65  123226.19
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1881.54    80.85   68137.72     807.33  127144.22
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1889.76    81.20   67865.70     841.50  121906.26
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1881.97    80.87   68181.55     691.41  119585.48
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1894.09    81.39   67769.40     848.03  120143.98
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1904.26    81.82   67440.29     692.41  135073.28
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1883.27    80.92   67519.09     603.72  119222.00
00:30:39.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1881.97    80.87   68212.68     510.81  135701.74
00:30:39.560 ========================================================
00:30:39.560 Total                                                                   : 18874.91   811.03   67886.01     510.81  149652.04
00:30:39.560
00:30:39.560 [2024-12-09 09:48:14.427482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3370 is same with the state(6) to be set
00:30:39.561 [2024-12-09 09:48:14.427526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3880 is same with the state(6) to be set
00:30:39.561 [2024-12-09 09:48:14.427555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3550 is same with the state(6) to be set
00:30:39.561 [2024-12-09 09:48:14.427584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5320 is same with the state(6) to be set
00:30:39.561 [2024-12-09 09:48:14.427616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd2fb0 is same with the state(6) to be set
00:30:39.561 [2024-12-09 09:48:14.427650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1fd4cc0 is same with the state(6) to be set 00:30:39.561 [2024-12-09 09:48:14.427680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd3190 is same with the state(6) to be set 00:30:39.561 [2024-12-09 09:48:14.427708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd5650 is same with the state(6) to be set 00:30:39.561 [2024-12-09 09:48:14.427737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd8b30 is same with the state(6) to be set 00:30:39.561 [2024-12-09 09:48:14.427766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd4ff0 is same with the state(6) to be set 00:30:39.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:39.561 09:48:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2931463 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2931463 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2931463 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:40.503 09:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:40.503 rmmod nvme_tcp 00:30:40.503 rmmod nvme_fabrics 00:30:40.503 rmmod nvme_keyring 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2931220 ']' 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2931220 00:30:40.503 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2931220 ']' 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2931220 00:30:40.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2931220) - No such process 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2931220 is not found' 00:30:40.504 Process with pid 2931220 is not found 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.504 09:48:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.419 09:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.419 00:30:42.419 real 0m10.305s 00:30:42.419 user 0m27.980s 00:30:42.419 sys 0m4.017s 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:30:42.419 ************************************ 00:30:42.419 END TEST nvmf_shutdown_tc4 00:30:42.419 ************************************ 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:30:42.419 00:30:42.419 real 0m43.869s 00:30:42.419 user 1m47.742s 00:30:42.419 sys 0m13.813s 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:42.419 ************************************ 00:30:42.419 END TEST nvmf_shutdown 00:30:42.419 ************************************ 00:30:42.419 09:48:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:42.681 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.681 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.681 09:48:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:42.681 ************************************ 00:30:42.681 START TEST nvmf_nsid 00:30:42.681 ************************************ 00:30:42.681 09:48:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:30:42.681 * Looking for test storage... 
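
One detail from the teardown that just finished is worth calling out: the iptr helper restores the firewall by replaying the saved ruleset minus the entries the suite tagged with an SPDK_NVMF comment, instead of flushing the tables wholesale. Reduced to its core, the idiom visible in the @791 lines above is:

  # drop only the comment-tagged SPDK rules; leave unrelated rules intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
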
00:30:42.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:42.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.681 --rc genhtml_branch_coverage=1 00:30:42.681 --rc genhtml_function_coverage=1 00:30:42.681 --rc genhtml_legend=1 00:30:42.681 --rc geninfo_all_blocks=1 00:30:42.681 --rc geninfo_unexecuted_blocks=1 00:30:42.681 00:30:42.681 ' 00:30:42.681 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.682 --rc genhtml_branch_coverage=1 00:30:42.682 --rc genhtml_function_coverage=1 00:30:42.682 --rc genhtml_legend=1 00:30:42.682 --rc geninfo_all_blocks=1 00:30:42.682 --rc geninfo_unexecuted_blocks=1 00:30:42.682 00:30:42.682 ' 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.682 --rc genhtml_branch_coverage=1 00:30:42.682 --rc genhtml_function_coverage=1 00:30:42.682 --rc genhtml_legend=1 00:30:42.682 --rc geninfo_all_blocks=1 00:30:42.682 --rc geninfo_unexecuted_blocks=1 00:30:42.682 00:30:42.682 ' 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:42.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.682 --rc genhtml_branch_coverage=1 00:30:42.682 --rc genhtml_function_coverage=1 00:30:42.682 --rc genhtml_legend=1 00:30:42.682 --rc geninfo_all_blocks=1 00:30:42.682 --rc geninfo_unexecuted_blocks=1 00:30:42.682 00:30:42.682 ' 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.682 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.944 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.944 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.945 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.945 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.945 09:48:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:51.090 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:51.091 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:51.091 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
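
The e810/x722/mlx appends above build an allow-list of vendor:device IDs that gather_supported_nvmf_pci_devs then matches against the PCI bus; here both ports of an Intel E810 (0x8086:0x159b, ice driver) qualify. Outside the test framework, the same match can be done straight from sysfs (a minimal sketch using the same IDs):

  # list PCI functions that are Intel E810 NICs (vendor 0x8086, device 0x159b)
  for dev in /sys/bus/pci/devices/*; do
      if [ "$(cat "$dev/vendor")" = "0x8086" ] && [ "$(cat "$dev/device")" = "0x159b" ]; then
          echo "E810 port ${dev##*/} -> $(ls "$dev/net" 2>/dev/null)"
      fi
  done
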
00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:51.091 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:51.091 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:51.091 09:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:51.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:30:51.091 00:30:51.091 --- 10.0.0.2 ping statistics --- 00:30:51.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.091 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:51.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:51.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:30:51.091 00:30:51.091 --- 10.0.0.1 ping statistics --- 00:30:51.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.091 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2937268 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2937268 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2937268 ']' 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.091 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.092 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.092 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.092 09:48:25 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.092 [2024-12-09 09:48:25.638592] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:30:51.092 [2024-12-09 09:48:25.638670] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.092 [2024-12-09 09:48:25.737043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.092 [2024-12-09 09:48:25.763523] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.092 [2024-12-09 09:48:25.763575] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.092 [2024-12-09 09:48:25.763584] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.092 [2024-12-09 09:48:25.763591] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.092 [2024-12-09 09:48:25.763597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.092 [2024-12-09 09:48:25.764327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2937594 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=dc337b6b-d114-464c-b496-5f47c68e53d7 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1a0d2ae2-32e7-4752-95b9-c759beead69c 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=a5137786-2f94-44b0-ac65-5b94ce7577cd 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.092 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.353 null0 00:30:51.353 null1 00:30:51.353 [2024-12-09 09:48:26.561874] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:30:51.353 [2024-12-09 09:48:26.561945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2937594 ] 00:30:51.353 null2 00:30:51.353 [2024-12-09 09:48:26.567358] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.353 [2024-12-09 09:48:26.591612] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2937594 /var/tmp/tgt2.sock 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2937594 ']' 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:30:51.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
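
Two SPDK targets coexist here because each instance owns its own RPC socket: the second spdk_tgt was started with -r /var/tmp/tgt2.sock, and every rpc.py call that follows selects it with -s. A sketch of the shape of that setup (stock SPDK RPC names; the bdev size, block size, and namespace UUID variable are illustrative, not lifted from nsid.sh):

  ./build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock &
  scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_transport -t tcp
  scripts/rpc.py -s /var/tmp/tgt2.sock bdev_null_create null2 100 4096        # 100 MB null bdev, 4 KiB blocks
  scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
  scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u "$ns3uuid"
  scripts/rpc.py -s /var/tmp/tgt2.sock nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421
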
00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.353 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:51.353 [2024-12-09 09:48:26.657319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:51.353 [2024-12-09 09:48:26.685115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.614 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.614 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:30:51.614 09:48:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:30:51.876 [2024-12-09 09:48:27.205877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.876 [2024-12-09 09:48:27.222064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:30:51.876 nvme0n1 nvme0n2 00:30:51.876 nvme1n1 00:30:51.876 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:30:51.876 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:30:51.876 09:48:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:30:53.261 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:30:53.262 09:48:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:30:54.668 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid dc337b6b-d114-464c-b496-5f47c68e53d7 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dc337b6bd114464cb4965f47c68e53d7 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DC337B6BD114464CB4965F47C68E53D7 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ DC337B6BD114464CB4965F47C68E53D7 == \D\C\3\3\7\B\6\B\D\1\1\4\4\6\4\C\B\4\9\6\5\F\4\7\C\6\8\E\5\3\D\7 ]] 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:30:54.668 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1a0d2ae2-32e7-4752-95b9-c759beead69c 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1a0d2ae232e7475295b9c759beead69c 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1A0D2AE232E7475295B9C759BEEAD69C 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1A0D2AE232E7475295B9C759BEEAD69C == \1\A\0\D\2\A\E\2\3\2\E\7\4\7\5\2\9\5\B\9\C\7\5\9\B\E\E\A\D\6\9\C ]] 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:30:54.669 09:48:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid a5137786-2f94-44b0-ac65-5b94ce7577cd 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a51377862f9444b0ac655b94ce7577cd 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A51377862F9444B0AC655B94CE7577CD 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ A51377862F9444B0AC655B94CE7577CD == \A\5\1\3\7\7\8\6\2\F\9\4\4\4\B\0\A\C\6\5\5\B\9\4\C\E\7\5\7\7\C\D ]] 00:30:54.669 09:48:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2937594 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2937594 ']' 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2937594 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937594 00:30:54.929 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:54.930 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:54.930 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937594' 00:30:54.930 killing process with pid 2937594 00:30:54.930 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2937594 00:30:54.930 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2937594 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.191 rmmod nvme_tcp 00:30:55.191 rmmod nvme_fabrics 00:30:55.191 rmmod nvme_keyring 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2937268 ']' 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2937268 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2937268 ']' 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2937268 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2937268 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2937268' 00:30:55.191 killing process with pid 2937268 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2937268 00:30:55.191 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2937268 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:55.452 09:48:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.368 00:30:57.368 real 0m14.848s 00:30:57.368 user 
0m11.347s 00:30:57.368 sys 0m6.765s 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:30:57.368 ************************************ 00:30:57.368 END TEST nvmf_nsid 00:30:57.368 ************************************ 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:57.368 00:30:57.368 real 19m39.142s 00:30:57.368 user 51m23.128s 00:30:57.368 sys 4m55.407s 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:57.368 09:48:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:30:57.368 ************************************ 00:30:57.368 END TEST nvmf_target_extra 00:30:57.368 ************************************ 00:30:57.629 09:48:32 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:57.629 09:48:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:57.629 09:48:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.629 09:48:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:57.629 ************************************ 00:30:57.629 START TEST nvmf_host 00:30:57.629 ************************************ 00:30:57.629 09:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:30:57.629 * Looking for test storage... 00:30:57.629 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:57.629 09:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.629 09:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.629 09:48:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:30:57.629 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:57.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.893 --rc genhtml_branch_coverage=1 00:30:57.893 --rc genhtml_function_coverage=1 00:30:57.893 --rc genhtml_legend=1 00:30:57.893 --rc geninfo_all_blocks=1 00:30:57.893 --rc geninfo_unexecuted_blocks=1 00:30:57.893 00:30:57.893 ' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:57.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.893 --rc genhtml_branch_coverage=1 00:30:57.893 --rc genhtml_function_coverage=1 00:30:57.893 --rc genhtml_legend=1 00:30:57.893 --rc geninfo_all_blocks=1 00:30:57.893 --rc geninfo_unexecuted_blocks=1 00:30:57.893 00:30:57.893 ' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:57.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.893 --rc genhtml_branch_coverage=1 00:30:57.893 --rc genhtml_function_coverage=1 00:30:57.893 --rc genhtml_legend=1 00:30:57.893 --rc geninfo_all_blocks=1 00:30:57.893 --rc geninfo_unexecuted_blocks=1 00:30:57.893 00:30:57.893 ' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:57.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.893 --rc genhtml_branch_coverage=1 00:30:57.893 --rc genhtml_function_coverage=1 00:30:57.893 --rc genhtml_legend=1 00:30:57.893 --rc geninfo_all_blocks=1 00:30:57.893 --rc geninfo_unexecuted_blocks=1 00:30:57.893 00:30:57.893 ' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
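The lt 1.15 2 call traced above is scripts/common.sh deciding whether the installed lcov predates version 2: each version string is split on '.', '-' and ':' and the pieces are compared numerically, left to right. A minimal standalone sketch of that idiom, assuming plain bash (the name lt_version is ours, not the repo's):

    lt_version() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i a b
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            a=${v1[i]:-0} b=${v2[i]:-0}          # missing components count as 0
            (( a < b )) && return 0              # strictly lower: true
            (( a > b )) && return 1
        done
        return 1                                 # equal is not "less than"
    }
    lt_version 1.15 2 && echo "lcov 1.15 < 2: use the legacy --rc lcov_* options"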
00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:57.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.893 09:48:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:57.893 ************************************ 00:30:57.893 START TEST nvmf_multicontroller 00:30:57.893 ************************************ 00:30:57.894 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:30:57.894 * Looking for test storage... 
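The "[: : integer expression expected" message above comes from nvmf/common.sh line 33 applying the numeric -eq operator to a variable that expanded empty; test(1) insists both operands be integers. The trace does not show which variable it is, so SOME_FLAG below is a placeholder. A defensive form that tolerates unset or empty values:

    [ '' -eq 1 ]                                   # reproduces the error above
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo "flag on"  # default to 0 before comparing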
00:30:57.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:57.894 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:57.894 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:30:57.894 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:30:58.156 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.157 --rc genhtml_branch_coverage=1 00:30:58.157 --rc genhtml_function_coverage=1 00:30:58.157 --rc genhtml_legend=1 00:30:58.157 --rc geninfo_all_blocks=1 00:30:58.157 --rc geninfo_unexecuted_blocks=1 00:30:58.157 00:30:58.157 ' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.157 --rc genhtml_branch_coverage=1 00:30:58.157 --rc genhtml_function_coverage=1 00:30:58.157 --rc genhtml_legend=1 00:30:58.157 --rc geninfo_all_blocks=1 00:30:58.157 --rc geninfo_unexecuted_blocks=1 00:30:58.157 00:30:58.157 ' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.157 --rc genhtml_branch_coverage=1 00:30:58.157 --rc genhtml_function_coverage=1 00:30:58.157 --rc genhtml_legend=1 00:30:58.157 --rc geninfo_all_blocks=1 00:30:58.157 --rc geninfo_unexecuted_blocks=1 00:30:58.157 00:30:58.157 ' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.157 --rc genhtml_branch_coverage=1 00:30:58.157 --rc genhtml_function_coverage=1 00:30:58.157 --rc genhtml_legend=1 00:30:58.157 --rc geninfo_all_blocks=1 00:30:58.157 --rc geninfo_unexecuted_blocks=1 00:30:58.157 00:30:58.157 ' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:30:58.157 09:48:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:58.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.157 09:48:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.157 09:48:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:31:06.301 
09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:06.301 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:06.302 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:06.302 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:06.302 09:48:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:06.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:06.302 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
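The device walk above is sysfs-driven: for every PCI address that matched a known NIC ID, common.sh globs /sys/bus/pci/devices/<addr>/net/ to find the kernel netdev sitting behind it. A standalone sketch of the same lookup for the two E810 ports (0x8086:0x159b, ice driver) found on this host:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] && echo "$pci -> ${path##*/}"  # e.g. 0000:4b:00.0 -> cvl_0_0
        done
    done

Both ports resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which the nvmf_tcp_init call that follows splits between the target namespace and the initiator side.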
00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:06.302 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:06.302 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:31:06.302 00:31:06.302 --- 10.0.0.2 ping statistics --- 00:31:06.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.302 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:06.302 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:06.302 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:31:06.302 00:31:06.302 --- 10.0.0.1 ping statistics --- 00:31:06.302 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:06.302 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2942622 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2942622 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2942622 ']' 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.302 09:48:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.302 [2024-12-09 09:48:40.872686] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:31:06.302 [2024-12-09 09:48:40.872751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:06.302 [2024-12-09 09:48:40.954267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:06.302 [2024-12-09 09:48:40.972657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:06.302 [2024-12-09 09:48:40.972691] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:06.302 [2024-12-09 09:48:40.972699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:06.303 [2024-12-09 09:48:40.972706] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:06.303 [2024-12-09 09:48:40.972712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:06.303 [2024-12-09 09:48:40.974012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:06.303 [2024-12-09 09:48:40.974168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.303 [2024-12-09 09:48:40.974169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.303 [2024-12-09 09:48:41.713803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.303 Malloc0 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.303 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 [2024-12-09 09:48:41.776738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 [2024-12-09 09:48:41.788685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 Malloc1 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2942729 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2942729 /var/tmp/bdevperf.sock 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2942729 ']' 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:06.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
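The trace so far has built the target side of the multicontroller test: a TCP transport, two subsystems (cnode1 and cnode2) each backed by a 64 MiB Malloc bdev with 512-byte blocks and listening on 10.0.0.2 at ports 4420 and 4421, and finally a bdevperf instance started in wait mode (-z) on its own RPC socket. A minimal sketch of the same sequence driven by hand with SPDK's rpc.py (the scripts/rpc.py location is an assumption about a standard checkout; every RPC name and argument below is taken directly from the trace):

  # Sketch: target-side setup as shown in the trace above (assumes a running
  # nvmf_tgt and scripts/rpc.py from a standard SPDK checkout on PATH).
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...repeat with Malloc1 / cnode2 / SPDK00000000000002, then start the initiator,
  # which idles (-z) until a perform_tests RPC arrives on its private socket:
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f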
00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:06.565 09:48:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.511 NVMe0n1 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.511 1 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.511 request: 00:31:07.511 { 00:31:07.511 "name": "NVMe0", 00:31:07.511 "trtype": "tcp", 00:31:07.511 "traddr": "10.0.0.2", 00:31:07.511 "adrfam": "ipv4", 00:31:07.511 "trsvcid": "4420", 00:31:07.511 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:31:07.511 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:31:07.511 "hostaddr": "10.0.0.1", 00:31:07.511 "prchk_reftag": false, 00:31:07.511 "prchk_guard": false, 00:31:07.511 "hdgst": false, 00:31:07.511 "ddgst": false, 00:31:07.511 "allow_unrecognized_csi": false, 00:31:07.511 "method": "bdev_nvme_attach_controller", 00:31:07.511 "req_id": 1 00:31:07.511 } 00:31:07.511 Got JSON-RPC error response 00:31:07.511 response: 00:31:07.511 { 00:31:07.511 "code": -114, 00:31:07.511 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:07.511 } 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:07.511 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.512 request: 00:31:07.512 { 00:31:07.512 "name": "NVMe0", 00:31:07.512 "trtype": "tcp", 00:31:07.512 "traddr": "10.0.0.2", 00:31:07.512 "adrfam": "ipv4", 00:31:07.512 "trsvcid": "4420", 00:31:07.512 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:07.512 "hostaddr": "10.0.0.1", 00:31:07.512 "prchk_reftag": false, 00:31:07.512 "prchk_guard": false, 00:31:07.512 "hdgst": false, 00:31:07.512 "ddgst": false, 00:31:07.512 "allow_unrecognized_csi": false, 00:31:07.512 "method": "bdev_nvme_attach_controller", 00:31:07.512 "req_id": 1 00:31:07.512 } 00:31:07.512 Got JSON-RPC error response 00:31:07.512 response: 00:31:07.512 { 00:31:07.512 "code": -114, 00:31:07.512 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:07.512 } 00:31:07.512 09:48:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.512 request: 00:31:07.512 { 00:31:07.512 "name": "NVMe0", 00:31:07.512 "trtype": "tcp", 00:31:07.512 "traddr": "10.0.0.2", 00:31:07.512 "adrfam": "ipv4", 00:31:07.512 "trsvcid": "4420", 00:31:07.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.512 "hostaddr": "10.0.0.1", 00:31:07.512 "prchk_reftag": false, 00:31:07.512 "prchk_guard": false, 00:31:07.512 "hdgst": false, 00:31:07.512 "ddgst": false, 00:31:07.512 "multipath": "disable", 00:31:07.512 "allow_unrecognized_csi": false, 00:31:07.512 "method": "bdev_nvme_attach_controller", 00:31:07.512 "req_id": 1 00:31:07.512 } 00:31:07.512 Got JSON-RPC error response 00:31:07.512 response: 00:31:07.512 { 00:31:07.512 "code": -114, 00:31:07.512 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:31:07.512 } 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:07.512 09:48:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.512 request: 00:31:07.512 { 00:31:07.512 "name": "NVMe0", 00:31:07.512 "trtype": "tcp", 00:31:07.512 "traddr": "10.0.0.2", 00:31:07.512 "adrfam": "ipv4", 00:31:07.512 "trsvcid": "4420", 00:31:07.512 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:07.512 "hostaddr": "10.0.0.1", 00:31:07.512 "prchk_reftag": false, 00:31:07.512 "prchk_guard": false, 00:31:07.512 "hdgst": false, 00:31:07.512 "ddgst": false, 00:31:07.512 "multipath": "failover", 00:31:07.512 "allow_unrecognized_csi": false, 00:31:07.512 "method": "bdev_nvme_attach_controller", 00:31:07.512 "req_id": 1 00:31:07.512 } 00:31:07.512 Got JSON-RPC error response 00:31:07.512 response: 00:31:07.512 { 00:31:07.512 "code": -114, 00:31:07.512 "message": "A controller named NVMe0 already exists with the specified network path" 00:31:07.512 } 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.512 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.774 NVMe0n1 00:31:07.774 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
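All three rejected attaches above return code -114 for the same underlying reason: bdevperf already holds a controller named NVMe0 reachable over that exact network path (10.0.0.2:4420), and judging by the error messages the -x flag only selects what happens when an additional path is offered ("disable" refuses any extra path, "failover" refuses a duplicate of an existing path). The attach that finally succeeds differs only in the port, so it registers as a genuinely new path for NVMe0. A hedged sketch of the pass/fail pattern; NOT below is a simplified stand-in for the autotest helper of the same name:

  NOT() { ! "$@"; }   # assert that the wrapped command fails
  # Duplicate of an existing path, same port -> rejected (-114), regardless of -x:
  NOT rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
  # New path on port 4421 -> accepted; NVMe0 now has two paths to cnode1:
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1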
00:31:07.774 09:48:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.774 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:31:07.774 09:48:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:09.160 { 00:31:09.160 "results": [ 00:31:09.160 { 00:31:09.160 "job": "NVMe0n1", 00:31:09.160 "core_mask": "0x1", 00:31:09.160 "workload": "write", 00:31:09.160 "status": "finished", 00:31:09.160 "queue_depth": 128, 00:31:09.160 "io_size": 4096, 00:31:09.160 "runtime": 1.006375, 00:31:09.160 "iops": 21867.59408769097, 00:31:09.160 "mibps": 85.42028940504285, 00:31:09.160 "io_failed": 0, 00:31:09.160 "io_timeout": 0, 00:31:09.160 "avg_latency_us": 5840.00503112646, 00:31:09.160 "min_latency_us": 2539.52, 00:31:09.160 "max_latency_us": 10704.213333333333 00:31:09.160 } 00:31:09.160 ], 00:31:09.160 "core_count": 1 00:31:09.160 } 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 
-- # '[' -z 2942729 ']' 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942729' 00:31:09.160 killing process with pid 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2942729 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:31:09.160 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:31:09.160 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:09.160 [2024-12-09 09:48:41.908660] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:31:09.160 [2024-12-09 09:48:41.908721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942729 ] 00:31:09.160 [2024-12-09 09:48:41.998976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.160 [2024-12-09 09:48:42.017961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.160 [2024-12-09 09:48:43.126354] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b5cbe5d3-2403-4fff-b1a1-87fa0805a7bf already exists 00:31:09.161 [2024-12-09 09:48:43.126384] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:b5cbe5d3-2403-4fff-b1a1-87fa0805a7bf alias for bdev NVMe1n1 00:31:09.161 [2024-12-09 09:48:43.126393] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:31:09.161 Running I/O for 1 seconds... 00:31:09.161 21816.00 IOPS, 85.22 MiB/s 00:31:09.161 Latency(us) 00:31:09.161 [2024-12-09T08:48:44.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.161 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:31:09.161 NVMe0n1 : 1.01 21867.59 85.42 0.00 0.00 5840.01 2539.52 10704.21 00:31:09.161 [2024-12-09T08:48:44.614Z] =================================================================================================================== 00:31:09.161 [2024-12-09T08:48:44.614Z] Total : 21867.59 85.42 0.00 0.00 5840.01 2539.52 10704.21 00:31:09.161 Received shutdown signal, test time was about 1.000000 seconds 00:31:09.161 00:31:09.161 Latency(us) 00:31:09.161 [2024-12-09T08:48:44.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.161 [2024-12-09T08:48:44.614Z] =================================================================================================================== 00:31:09.161 [2024-12-09T08:48:44.614Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:09.161 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:09.161 rmmod nvme_tcp 00:31:09.161 rmmod nvme_fabrics 00:31:09.161 rmmod nvme_keyring 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:31:09.161 
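The JSON results block and the try.txt dump agree: with queue depth 128 and 4 KiB writes, bdevperf sustained 21867.59 IOPS over the 1.006 s run. The ERROR lines captured in try.txt are noise expected by this test rather than failures; a plausible reading is that NVMe1 attaches to the same cnode1 namespace NVMe0 already exposes, so registering the namespace UUID alias for NVMe1n1 collides and spdk_bdev_register() fails, while the controller count still reaches the expected 2. The reported throughput and latency are internally consistent, as a quick cross-check shows:

  # Cross-checking the reported numbers (bc used purely for illustration):
  echo '21867.59 * 4096 / 1048576' | bc -l   # ~85.42 -> matches "mibps": 85.42
  echo '128 / 21867.59 * 1000000' | bc -l    # ~5853 us for a full queue of 128,
                                             # in line with "avg_latency_us": 5840.01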
09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2942622 ']' 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2942622 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2942622 ']' 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2942622 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:09.161 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2942622 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2942622' 00:31:09.422 killing process with pid 2942622 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2942622 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2942622 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:09.422 09:48:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:11.968 00:31:11.968 real 0m13.669s 00:31:11.968 user 0m16.480s 00:31:11.968 sys 0m6.271s 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:31:11.968 ************************************ 00:31:11.968 END TEST nvmf_multicontroller 00:31:11.968 ************************************ 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:11.968 ************************************ 00:31:11.968 START TEST nvmf_aer 00:31:11.968 ************************************ 00:31:11.968 09:48:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:31:11.968 * Looking for test storage... 00:31:11.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:11.968 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.969 --rc genhtml_branch_coverage=1 00:31:11.969 --rc genhtml_function_coverage=1 00:31:11.969 --rc genhtml_legend=1 00:31:11.969 --rc geninfo_all_blocks=1 00:31:11.969 --rc geninfo_unexecuted_blocks=1 00:31:11.969 00:31:11.969 ' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.969 --rc genhtml_branch_coverage=1 00:31:11.969 --rc genhtml_function_coverage=1 00:31:11.969 --rc genhtml_legend=1 00:31:11.969 --rc geninfo_all_blocks=1 00:31:11.969 --rc geninfo_unexecuted_blocks=1 00:31:11.969 00:31:11.969 ' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.969 --rc genhtml_branch_coverage=1 00:31:11.969 --rc genhtml_function_coverage=1 00:31:11.969 --rc genhtml_legend=1 00:31:11.969 --rc geninfo_all_blocks=1 00:31:11.969 --rc geninfo_unexecuted_blocks=1 00:31:11.969 00:31:11.969 ' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:11.969 --rc genhtml_branch_coverage=1 00:31:11.969 --rc genhtml_function_coverage=1 00:31:11.969 --rc genhtml_legend=1 00:31:11.969 --rc geninfo_all_blocks=1 00:31:11.969 --rc geninfo_unexecuted_blocks=1 00:31:11.969 00:31:11.969 ' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:11.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:31:11.969 09:48:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:20.109 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:20.109 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.109 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:20.110 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.110 09:48:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:20.110 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.110 
09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:31:20.110 00:31:20.110 --- 10.0.0.2 ping statistics --- 00:31:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.110 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:31:20.110 00:31:20.110 --- 10.0.0.1 ping statistics --- 00:31:20.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.110 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2947413 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2947413 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2947413 ']' 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.110 09:48:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.110 [2024-12-09 09:48:54.664157] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
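Because this host carries a dual-port E810 NIC (both ports detected above as cvl_0_0 and cvl_0_1, presumably cabled back to back), common.sh fakes a two-machine setup on a single box: the target port is moved into its own network namespace with 10.0.0.2, the initiator port keeps 10.0.0.1 in the root namespace, and the two pings then verify connectivity in each direction. A condensed sketch of that topology, using only commands that appear in the trace above:

  ip netns add cvl_0_0_ns_spdk                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

nvmf_tgt itself is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation above), so from SPDK's point of view it genuinely owns 10.0.0.2.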
00:31:20.110 [2024-12-09 09:48:54.664226] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.110 [2024-12-09 09:48:54.762498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.110 [2024-12-09 09:48:54.791346] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.110 [2024-12-09 09:48:54.791399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.110 [2024-12-09 09:48:54.791409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.110 [2024-12-09 09:48:54.791416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.110 [2024-12-09 09:48:54.791423] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.110 [2024-12-09 09:48:54.793376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.110 [2024-12-09 09:48:54.793519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.110 [2024-12-09 09:48:54.793755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.110 [2024-12-09 09:48:54.794029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.110 [2024-12-09 09:48:55.525072] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.110 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.371 Malloc0 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.371 [2024-12-09 09:48:55.594990] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.371 [ 00:31:20.371 { 00:31:20.371 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:20.371 "subtype": "Discovery", 00:31:20.371 "listen_addresses": [], 00:31:20.371 "allow_any_host": true, 00:31:20.371 "hosts": [] 00:31:20.371 }, 00:31:20.371 { 00:31:20.371 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.371 "subtype": "NVMe", 00:31:20.371 "listen_addresses": [ 00:31:20.371 { 00:31:20.371 "trtype": "TCP", 00:31:20.371 "adrfam": "IPv4", 00:31:20.371 "traddr": "10.0.0.2", 00:31:20.371 "trsvcid": "4420" 00:31:20.371 } 00:31:20.371 ], 00:31:20.371 "allow_any_host": true, 00:31:20.371 "hosts": [], 00:31:20.371 "serial_number": "SPDK00000000000001", 00:31:20.371 "model_number": "SPDK bdev Controller", 00:31:20.371 "max_namespaces": 2, 00:31:20.371 "min_cntlid": 1, 00:31:20.371 "max_cntlid": 65519, 00:31:20.371 "namespaces": [ 00:31:20.371 { 00:31:20.371 "nsid": 1, 00:31:20.371 "bdev_name": "Malloc0", 00:31:20.371 "name": "Malloc0", 00:31:20.371 "nguid": "897922E768B24E53A8A5C5C1FD53956A", 00:31:20.371 "uuid": "897922e7-68b2-4e53-a8a5-c5c1fd53956a" 00:31:20.371 } 00:31:20.371 ] 00:31:20.371 } 00:31:20.371 ] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2947761 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:31:20.371 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 Malloc1 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 Asynchronous Event Request test 00:31:20.632 Attaching to 10.0.0.2 00:31:20.632 Attached to 10.0.0.2 00:31:20.632 Registering asynchronous event callbacks... 00:31:20.632 Starting namespace attribute notice tests for all controllers... 00:31:20.632 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:31:20.632 aer_cb - Changed Namespace 00:31:20.632 Cleaning up... 
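The aer.sh sequence recorded above caps the subsystem at two namespaces (-m 2), starts the aer test binary against it with -n 2 and a touch file, polls until that file appears, and only then attaches a second namespace so the target raises a namespace-attribute-changed AEN ("aer_cb - Changed Namespace" above). A condensed sketch of the same flow, assuming SPDK's scripts/rpc.py stands in for the harness's rpc_cmd wrapper:

    # AER test flow (sketch); rpc.py targets the running nvmf_tgt
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    touch_file=/tmp/aer_touch_file
    rm -f "$touch_file"
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t "$touch_file" &

    i=0                                          # waitforfile: poll until aer is armed
    while [ ! -e "$touch_file" ] && [ "$i" -lt 200 ]; do i=$((i + 1)); sleep 0.1; done

    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # triggers the AEN
    wait $!                                      # aer exits once the event is seen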
00:31:20.632 [ 00:31:20.632 { 00:31:20.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:20.632 "subtype": "Discovery", 00:31:20.632 "listen_addresses": [], 00:31:20.632 "allow_any_host": true, 00:31:20.632 "hosts": [] 00:31:20.632 }, 00:31:20.632 { 00:31:20.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:20.632 "subtype": "NVMe", 00:31:20.632 "listen_addresses": [ 00:31:20.632 { 00:31:20.632 "trtype": "TCP", 00:31:20.632 "adrfam": "IPv4", 00:31:20.632 "traddr": "10.0.0.2", 00:31:20.632 "trsvcid": "4420" 00:31:20.632 } 00:31:20.632 ], 00:31:20.632 "allow_any_host": true, 00:31:20.632 "hosts": [], 00:31:20.632 "serial_number": "SPDK00000000000001", 00:31:20.632 "model_number": "SPDK bdev Controller", 00:31:20.632 "max_namespaces": 2, 00:31:20.632 "min_cntlid": 1, 00:31:20.632 "max_cntlid": 65519, 00:31:20.632 "namespaces": [ 00:31:20.632 { 00:31:20.632 "nsid": 1, 00:31:20.632 "bdev_name": "Malloc0", 00:31:20.632 "name": "Malloc0", 00:31:20.632 "nguid": "897922E768B24E53A8A5C5C1FD53956A", 00:31:20.632 "uuid": "897922e7-68b2-4e53-a8a5-c5c1fd53956a" 00:31:20.632 }, 00:31:20.632 { 00:31:20.632 "nsid": 2, 00:31:20.632 "bdev_name": "Malloc1", 00:31:20.632 "name": "Malloc1", 00:31:20.632 "nguid": "BE6E600E8659492A8AE4B48326A16180", 00:31:20.632 "uuid": "be6e600e-8659-492a-8ae4-b48326a16180" 00:31:20.632 } 00:31:20.632 ] 00:31:20.632 } 00:31:20.632 ] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2947761 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:20.632 09:48:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:20.632 rmmod 
nvme_tcp 00:31:20.632 rmmod nvme_fabrics 00:31:20.632 rmmod nvme_keyring 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2947413 ']' 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2947413 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2947413 ']' 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2947413 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947413 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947413' 00:31:20.633 killing process with pid 2947413 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2947413 00:31:20.633 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2947413 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.894 09:48:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:23.442 00:31:23.442 real 0m11.364s 00:31:23.442 user 0m7.869s 00:31:23.442 sys 0m6.086s 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:31:23.442 ************************************ 00:31:23.442 END TEST nvmf_aer 00:31:23.442 ************************************ 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:23.442 ************************************ 00:31:23.442 START TEST nvmf_async_init 00:31:23.442 ************************************ 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:31:23.442 * Looking for test storage... 00:31:23.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:31:23.442 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:23.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.443 --rc genhtml_branch_coverage=1 00:31:23.443 --rc genhtml_function_coverage=1 00:31:23.443 --rc genhtml_legend=1 00:31:23.443 --rc geninfo_all_blocks=1 00:31:23.443 --rc geninfo_unexecuted_blocks=1 00:31:23.443 00:31:23.443 ' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:23.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.443 --rc genhtml_branch_coverage=1 00:31:23.443 --rc genhtml_function_coverage=1 00:31:23.443 --rc genhtml_legend=1 00:31:23.443 --rc geninfo_all_blocks=1 00:31:23.443 --rc geninfo_unexecuted_blocks=1 00:31:23.443 00:31:23.443 ' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:23.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.443 --rc genhtml_branch_coverage=1 00:31:23.443 --rc genhtml_function_coverage=1 00:31:23.443 --rc genhtml_legend=1 00:31:23.443 --rc geninfo_all_blocks=1 00:31:23.443 --rc geninfo_unexecuted_blocks=1 00:31:23.443 00:31:23.443 ' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:23.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:23.443 --rc genhtml_branch_coverage=1 00:31:23.443 --rc genhtml_function_coverage=1 00:31:23.443 --rc genhtml_legend=1 00:31:23.443 --rc geninfo_all_blocks=1 00:31:23.443 --rc geninfo_unexecuted_blocks=1 00:31:23.443 00:31:23.443 ' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:23.443 09:48:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:23.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:31:23.443 09:48:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6ba008591605436f9931ba8e5fb542b4 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:31:23.443 09:48:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:31.691 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:31.692 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:31.692 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:31.692 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:31.692 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:31.692 09:49:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:31.692 09:49:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:31.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:31.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:31:31.692 00:31:31.692 --- 10.0.0.2 ping statistics --- 00:31:31.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.692 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:31.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:31.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:31:31.692 00:31:31.692 --- 10.0.0.1 ping statistics --- 00:31:31.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:31.692 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2951967 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2951967 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2951967 ']' 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.692 09:49:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.692 [2024-12-09 09:49:06.177888] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
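Before either test can assign addresses, gather_supported_nvmf_pci_devs (seen above with its e810/x722/mlx PCI-ID tables) resolves each supported PCI function to its kernel interface through sysfs and keeps only links that are up; the "[[ up == up ]]" records are that check after expansion. The core lookup reduces to a sysfs glob, sketched here (reading operstate is an assumption about what the harness expands):

    # Map PCI functions to net devices via sysfs (sketch; E810 ports from this run)
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            dev=${path##*/}                      # strip the path, as in common.sh@427
            if [ "$(cat "$path/operstate" 2>/dev/null)" = up ]; then
                echo "Found net devices under $pci: $dev"
            fi
        done
    done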
00:31:31.692 [2024-12-09 09:49:06.177957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.692 [2024-12-09 09:49:06.278410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.692 [2024-12-09 09:49:06.305138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.693 [2024-12-09 09:49:06.305189] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.693 [2024-12-09 09:49:06.305198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.693 [2024-12-09 09:49:06.305205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.693 [2024-12-09 09:49:06.305211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.693 [2024-12-09 09:49:06.305946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 [2024-12-09 09:49:07.052490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 null0 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6ba008591605436f9931ba8e5fb542b4 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:31.693 [2024-12-09 09:49:07.096835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.693 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.003 nvme0n1 00:31:32.003 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.003 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:32.003 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.003 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.003 [ 00:31:32.003 { 00:31:32.003 "name": "nvme0n1", 00:31:32.003 "aliases": [ 00:31:32.003 "6ba00859-1605-436f-9931-ba8e5fb542b4" 00:31:32.003 ], 00:31:32.003 "product_name": "NVMe disk", 00:31:32.003 "block_size": 512, 00:31:32.004 "num_blocks": 2097152, 00:31:32.004 "uuid": "6ba00859-1605-436f-9931-ba8e5fb542b4", 00:31:32.004 "numa_id": 0, 00:31:32.004 "assigned_rate_limits": { 00:31:32.004 "rw_ios_per_sec": 0, 00:31:32.004 "rw_mbytes_per_sec": 0, 00:31:32.004 "r_mbytes_per_sec": 0, 00:31:32.004 "w_mbytes_per_sec": 0 00:31:32.004 }, 00:31:32.004 "claimed": false, 00:31:32.004 "zoned": false, 00:31:32.004 "supported_io_types": { 00:31:32.004 "read": true, 00:31:32.004 "write": true, 00:31:32.004 "unmap": false, 00:31:32.004 "flush": true, 00:31:32.004 "reset": true, 00:31:32.004 "nvme_admin": true, 00:31:32.004 "nvme_io": true, 00:31:32.004 "nvme_io_md": false, 00:31:32.004 "write_zeroes": true, 00:31:32.004 "zcopy": false, 00:31:32.004 "get_zone_info": false, 00:31:32.004 "zone_management": false, 00:31:32.004 "zone_append": false, 00:31:32.004 "compare": true, 00:31:32.004 "compare_and_write": true, 00:31:32.004 "abort": true, 00:31:32.004 "seek_hole": false, 00:31:32.004 "seek_data": false, 00:31:32.004 "copy": true, 00:31:32.004 "nvme_iov_md": false 00:31:32.004 }, 00:31:32.004 
"memory_domains": [ 00:31:32.004 { 00:31:32.004 "dma_device_id": "system", 00:31:32.004 "dma_device_type": 1 00:31:32.004 } 00:31:32.004 ], 00:31:32.004 "driver_specific": { 00:31:32.004 "nvme": [ 00:31:32.004 { 00:31:32.004 "trid": { 00:31:32.004 "trtype": "TCP", 00:31:32.004 "adrfam": "IPv4", 00:31:32.004 "traddr": "10.0.0.2", 00:31:32.004 "trsvcid": "4420", 00:31:32.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:32.004 }, 00:31:32.004 "ctrlr_data": { 00:31:32.004 "cntlid": 1, 00:31:32.004 "vendor_id": "0x8086", 00:31:32.004 "model_number": "SPDK bdev Controller", 00:31:32.004 "serial_number": "00000000000000000000", 00:31:32.004 "firmware_revision": "25.01", 00:31:32.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.004 "oacs": { 00:31:32.004 "security": 0, 00:31:32.004 "format": 0, 00:31:32.004 "firmware": 0, 00:31:32.004 "ns_manage": 0 00:31:32.004 }, 00:31:32.004 "multi_ctrlr": true, 00:31:32.004 "ana_reporting": false 00:31:32.004 }, 00:31:32.004 "vs": { 00:31:32.004 "nvme_version": "1.3" 00:31:32.004 }, 00:31:32.004 "ns_data": { 00:31:32.004 "id": 1, 00:31:32.004 "can_share": true 00:31:32.004 } 00:31:32.004 } 00:31:32.004 ], 00:31:32.004 "mp_policy": "active_passive" 00:31:32.004 } 00:31:32.004 } 00:31:32.004 ] 00:31:32.004 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.004 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:32.004 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.004 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.004 [2024-12-09 09:49:07.351218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:32.004 [2024-12-09 09:49:07.351298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d36d0 (9): Bad file descriptor 00:31:32.291 [2024-12-09 09:49:07.483749] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:31:32.291 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.291 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:31:32.291 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.291 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.291 [ 00:31:32.291 { 00:31:32.291 "name": "nvme0n1", 00:31:32.291 "aliases": [ 00:31:32.291 "6ba00859-1605-436f-9931-ba8e5fb542b4" 00:31:32.291 ], 00:31:32.291 "product_name": "NVMe disk", 00:31:32.291 "block_size": 512, 00:31:32.291 "num_blocks": 2097152, 00:31:32.291 "uuid": "6ba00859-1605-436f-9931-ba8e5fb542b4", 00:31:32.291 "numa_id": 0, 00:31:32.292 "assigned_rate_limits": { 00:31:32.292 "rw_ios_per_sec": 0, 00:31:32.292 "rw_mbytes_per_sec": 0, 00:31:32.292 "r_mbytes_per_sec": 0, 00:31:32.292 "w_mbytes_per_sec": 0 00:31:32.292 }, 00:31:32.292 "claimed": false, 00:31:32.292 "zoned": false, 00:31:32.292 "supported_io_types": { 00:31:32.292 "read": true, 00:31:32.292 "write": true, 00:31:32.292 "unmap": false, 00:31:32.292 "flush": true, 00:31:32.292 "reset": true, 00:31:32.292 "nvme_admin": true, 00:31:32.292 "nvme_io": true, 00:31:32.292 "nvme_io_md": false, 00:31:32.292 "write_zeroes": true, 00:31:32.292 "zcopy": false, 00:31:32.292 "get_zone_info": false, 00:31:32.292 "zone_management": false, 00:31:32.292 "zone_append": false, 00:31:32.292 "compare": true, 00:31:32.292 "compare_and_write": true, 00:31:32.292 "abort": true, 00:31:32.292 "seek_hole": false, 00:31:32.292 "seek_data": false, 00:31:32.292 "copy": true, 00:31:32.292 "nvme_iov_md": false 00:31:32.292 }, 00:31:32.292 "memory_domains": [ 00:31:32.292 { 00:31:32.292 "dma_device_id": "system", 00:31:32.292 "dma_device_type": 1 00:31:32.292 } 00:31:32.292 ], 00:31:32.292 "driver_specific": { 00:31:32.292 "nvme": [ 00:31:32.292 { 00:31:32.292 "trid": { 00:31:32.292 "trtype": "TCP", 00:31:32.292 "adrfam": "IPv4", 00:31:32.292 "traddr": "10.0.0.2", 00:31:32.292 "trsvcid": "4420", 00:31:32.292 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:32.292 }, 00:31:32.292 "ctrlr_data": { 00:31:32.292 "cntlid": 2, 00:31:32.292 "vendor_id": "0x8086", 00:31:32.292 "model_number": "SPDK bdev Controller", 00:31:32.292 "serial_number": "00000000000000000000", 00:31:32.292 "firmware_revision": "25.01", 00:31:32.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.292 "oacs": { 00:31:32.292 "security": 0, 00:31:32.292 "format": 0, 00:31:32.292 "firmware": 0, 00:31:32.292 "ns_manage": 0 00:31:32.292 }, 00:31:32.292 "multi_ctrlr": true, 00:31:32.292 "ana_reporting": false 00:31:32.292 }, 00:31:32.292 "vs": { 00:31:32.292 "nvme_version": "1.3" 00:31:32.292 }, 00:31:32.292 "ns_data": { 00:31:32.292 "id": 1, 00:31:32.292 "can_share": true 00:31:32.292 } 00:31:32.292 } 00:31:32.292 ], 00:31:32.292 "mp_policy": "active_passive" 00:31:32.292 } 00:31:32.292 } 00:31:32.292 ] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xV8D2tGufx 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xV8D2tGufx 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xV8D2tGufx 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 [2024-12-09 09:49:07.559868] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:32.292 [2024-12-09 09:49:07.560034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 [2024-12-09 09:49:07.583945] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:32.292 nvme0n1 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.292 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.292 [ 00:31:32.292 { 00:31:32.292 "name": "nvme0n1", 00:31:32.292 "aliases": [ 00:31:32.292 "6ba00859-1605-436f-9931-ba8e5fb542b4" 00:31:32.292 ], 00:31:32.292 "product_name": "NVMe disk", 00:31:32.292 "block_size": 512, 00:31:32.292 "num_blocks": 2097152, 00:31:32.292 "uuid": "6ba00859-1605-436f-9931-ba8e5fb542b4", 00:31:32.292 "numa_id": 0, 00:31:32.292 "assigned_rate_limits": { 00:31:32.292 "rw_ios_per_sec": 0, 00:31:32.292 "rw_mbytes_per_sec": 0, 00:31:32.292 "r_mbytes_per_sec": 0, 00:31:32.292 "w_mbytes_per_sec": 0 00:31:32.292 }, 00:31:32.292 "claimed": false, 00:31:32.292 "zoned": false, 00:31:32.292 "supported_io_types": { 00:31:32.292 "read": true, 00:31:32.292 "write": true, 00:31:32.292 "unmap": false, 00:31:32.292 "flush": true, 00:31:32.292 "reset": true, 00:31:32.292 "nvme_admin": true, 00:31:32.292 "nvme_io": true, 00:31:32.292 "nvme_io_md": false, 00:31:32.292 "write_zeroes": true, 00:31:32.292 "zcopy": false, 00:31:32.292 "get_zone_info": false, 00:31:32.292 "zone_management": false, 00:31:32.292 "zone_append": false, 00:31:32.292 "compare": true, 00:31:32.292 "compare_and_write": true, 00:31:32.292 "abort": true, 00:31:32.292 "seek_hole": false, 00:31:32.292 "seek_data": false, 00:31:32.292 "copy": true, 00:31:32.292 "nvme_iov_md": false 00:31:32.292 }, 00:31:32.292 "memory_domains": [ 00:31:32.292 { 00:31:32.292 "dma_device_id": "system", 00:31:32.292 "dma_device_type": 1 00:31:32.292 } 00:31:32.292 ], 00:31:32.292 "driver_specific": { 00:31:32.292 "nvme": [ 00:31:32.292 { 00:31:32.292 "trid": { 00:31:32.292 "trtype": "TCP", 00:31:32.292 "adrfam": "IPv4", 00:31:32.292 "traddr": "10.0.0.2", 00:31:32.292 "trsvcid": "4421", 00:31:32.292 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:32.292 }, 00:31:32.292 "ctrlr_data": { 00:31:32.292 "cntlid": 3, 00:31:32.292 "vendor_id": "0x8086", 00:31:32.292 "model_number": "SPDK bdev Controller", 00:31:32.292 "serial_number": "00000000000000000000", 00:31:32.292 "firmware_revision": "25.01", 00:31:32.292 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:32.292 "oacs": { 00:31:32.292 "security": 0, 00:31:32.292 "format": 0, 00:31:32.292 "firmware": 0, 00:31:32.292 "ns_manage": 0 00:31:32.292 }, 00:31:32.292 "multi_ctrlr": true, 00:31:32.292 "ana_reporting": false 00:31:32.292 }, 00:31:32.292 "vs": { 00:31:32.292 "nvme_version": "1.3" 00:31:32.292 }, 00:31:32.292 "ns_data": { 00:31:32.292 "id": 1, 00:31:32.292 "can_share": true 00:31:32.292 } 00:31:32.292 } 00:31:32.292 ], 00:31:32.292 "mp_policy": "active_passive" 00:31:32.292 } 00:31:32.292 } 00:31:32.293 ] 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xV8D2tGufx 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
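Condensed from the trace above, the TLS path the test just exercised comes down to staging an interchange-format PSK in a 0600 file, registering it with the keyring, pinning the subsystem to one named host, exposing a --secure-channel listener, and passing --psk on both the host mapping and the initiator attach. A sketch with SPDK's scripts/rpc.py (key value elided; addresses as in this run):

  KEY_PATH=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:...' > "$KEY_PATH"     # retained PSK, interchange format; value elided
  chmod 0600 "$KEY_PATH"                          # tightened before registration, as the test does
  scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0

Both the listener and the attach log "TLS support is considered experimental", as seen above; the handshake succeeding is what lets bdev_get_bdevs report the new cntlid 3 controller on port 4421.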
00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:32.293 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:32.293 rmmod nvme_tcp 00:31:32.293 rmmod nvme_fabrics 00:31:32.553 rmmod nvme_keyring 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2951967 ']' 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2951967 ']' 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2951967' 00:31:32.553 killing process with pid 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2951967 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:32.553 09:49:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:31:32.814 09:49:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:32.814 09:49:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:32.814 09:49:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:31:32.814 09:49:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:32.814 09:49:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:34.725 00:31:34.725 real 0m11.724s 00:31:34.725 user 0m4.105s 00:31:34.725 sys 0m6.171s 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:31:34.725 ************************************ 00:31:34.725 END TEST nvmf_async_init 00:31:34.725 ************************************ 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:34.725 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:34.986 ************************************ 00:31:34.986 START TEST dma 00:31:34.986 ************************************ 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:31:34.986 * Looking for test storage... 00:31:34.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:31:34.986 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:34.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.987 --rc genhtml_branch_coverage=1 00:31:34.987 --rc genhtml_function_coverage=1 00:31:34.987 --rc genhtml_legend=1 00:31:34.987 --rc geninfo_all_blocks=1 00:31:34.987 --rc geninfo_unexecuted_blocks=1 00:31:34.987 00:31:34.987 ' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:34.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.987 --rc genhtml_branch_coverage=1 00:31:34.987 --rc genhtml_function_coverage=1 00:31:34.987 --rc genhtml_legend=1 00:31:34.987 --rc geninfo_all_blocks=1 00:31:34.987 --rc geninfo_unexecuted_blocks=1 00:31:34.987 00:31:34.987 ' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:34.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.987 --rc genhtml_branch_coverage=1 00:31:34.987 --rc genhtml_function_coverage=1 00:31:34.987 --rc genhtml_legend=1 00:31:34.987 --rc geninfo_all_blocks=1 00:31:34.987 --rc geninfo_unexecuted_blocks=1 00:31:34.987 00:31:34.987 ' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:34.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:34.987 --rc genhtml_branch_coverage=1 00:31:34.987 --rc genhtml_function_coverage=1 00:31:34.987 --rc genhtml_legend=1 00:31:34.987 --rc geninfo_all_blocks=1 00:31:34.987 --rc geninfo_unexecuted_blocks=1 00:31:34.987 00:31:34.987 ' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:34.987 
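The lcov version gate walked step by step at the top of this test (lt 1.15 2 through cmp_versions) is a plain component-wise numeric compare: both strings are split on ., - and :, then compared index by index until a component differs. A condensed sketch of that logic (function name illustrative, not the script's own):

  version_lt() {                      # succeeds when $1 sorts before $2
    local IFS='.-:' i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                          # equal is not less-than
  }
  version_lt 1.15 2 && echo 'lcov 1.15 predates 2.x'   # matches the trace: 1 < 2 on the first component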
09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:34.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:31:34.987 00:31:34.987 real 0m0.240s 00:31:34.987 user 0m0.144s 00:31:34.987 sys 0m0.113s 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:34.987 09:49:10 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:31:34.987 ************************************ 00:31:34.987 END TEST dma 00:31:34.987 ************************************ 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:35.248 ************************************ 00:31:35.248 START TEST nvmf_identify 00:31:35.248 
************************************ 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:31:35.248 * Looking for test storage... 00:31:35.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:31:35.248 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.249 --rc genhtml_branch_coverage=1 00:31:35.249 --rc genhtml_function_coverage=1 00:31:35.249 --rc genhtml_legend=1 00:31:35.249 --rc geninfo_all_blocks=1 00:31:35.249 --rc geninfo_unexecuted_blocks=1 00:31:35.249 00:31:35.249 ' 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.249 --rc genhtml_branch_coverage=1 00:31:35.249 --rc genhtml_function_coverage=1 00:31:35.249 --rc genhtml_legend=1 00:31:35.249 --rc geninfo_all_blocks=1 00:31:35.249 --rc geninfo_unexecuted_blocks=1 00:31:35.249 00:31:35.249 ' 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.249 --rc genhtml_branch_coverage=1 00:31:35.249 --rc genhtml_function_coverage=1 00:31:35.249 --rc genhtml_legend=1 00:31:35.249 --rc geninfo_all_blocks=1 00:31:35.249 --rc geninfo_unexecuted_blocks=1 00:31:35.249 00:31:35.249 ' 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:35.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.249 --rc genhtml_branch_coverage=1 00:31:35.249 --rc genhtml_function_coverage=1 00:31:35.249 --rc genhtml_legend=1 00:31:35.249 --rc geninfo_all_blocks=1 00:31:35.249 --rc geninfo_unexecuted_blocks=1 00:31:35.249 00:31:35.249 ' 00:31:35.249 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:35.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.510 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:31:35.511 09:49:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:31:43.674 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:43.675 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:43.675 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:43.675 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:43.675 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.675 09:49:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:43.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:31:43.675 00:31:43.675 --- 10.0.0.2 ping statistics --- 00:31:43.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.675 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:31:43.675 00:31:43.675 --- 10.0.0.1 ping statistics --- 00:31:43.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.675 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2956504 00:31:43.675 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2956504 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2956504 ']' 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:43.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 [2024-12-09 09:49:18.277864] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
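Everything nvmf_tcp_init just set up reduces to a two-interface, one-namespace topology: the first e810 port (cvl_0_0) moves into a fresh cvl_0_0_ns_spdk namespace as the target's 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, an iptables rule opens the NVMe/TCP port, and a ping each way verifies the link. Condensed from the commands in the trace (interface names are this rig's):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The nvmf_tgt process is then launched inside that namespace, as the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command below shows.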
00:31:43.676 [2024-12-09 09:49:18.277929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:43.676 [2024-12-09 09:49:18.365765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:43.676 [2024-12-09 09:49:18.395237] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:43.676 [2024-12-09 09:49:18.395286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:43.676 [2024-12-09 09:49:18.395295] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:43.676 [2024-12-09 09:49:18.395302] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:43.676 [2024-12-09 09:49:18.395309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:43.676 [2024-12-09 09:49:18.397325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.676 [2024-12-09 09:49:18.397607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:43.676 [2024-12-09 09:49:18.397815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:43.676 [2024-12-09 09:49:18.397932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 [2024-12-09 09:49:18.491921] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 Malloc0 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 [2024-12-09 09:49:18.602960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.676 [ 00:31:43.676 { 00:31:43.676 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:43.676 "subtype": "Discovery", 00:31:43.676 "listen_addresses": [ 00:31:43.676 { 00:31:43.676 "trtype": "TCP", 00:31:43.676 "adrfam": "IPv4", 00:31:43.676 "traddr": "10.0.0.2", 00:31:43.676 "trsvcid": "4420" 00:31:43.676 } 00:31:43.676 ], 00:31:43.676 "allow_any_host": true, 00:31:43.676 "hosts": [] 00:31:43.676 }, 00:31:43.676 { 00:31:43.676 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:43.676 "subtype": "NVMe", 00:31:43.676 "listen_addresses": [ 00:31:43.676 { 00:31:43.676 "trtype": "TCP", 00:31:43.676 "adrfam": "IPv4", 00:31:43.676 "traddr": "10.0.0.2", 00:31:43.676 "trsvcid": "4420" 00:31:43.676 } 00:31:43.676 ], 00:31:43.676 "allow_any_host": true, 00:31:43.676 "hosts": [], 00:31:43.676 "serial_number": "SPDK00000000000001", 00:31:43.676 "model_number": "SPDK bdev Controller", 00:31:43.676 "max_namespaces": 32, 00:31:43.676 "min_cntlid": 1, 00:31:43.676 "max_cntlid": 65519, 00:31:43.676 "namespaces": [ 00:31:43.676 { 00:31:43.676 "nsid": 1, 00:31:43.676 "bdev_name": "Malloc0", 00:31:43.676 "name": "Malloc0", 00:31:43.676 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:31:43.676 "eui64": "ABCDEF0123456789", 00:31:43.676 "uuid": "cd40461d-f8ff-4180-ba9a-2dfabd8bc896" 00:31:43.676 } 00:31:43.676 ] 00:31:43.676 } 00:31:43.676 ] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.676 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:31:43.676 [2024-12-09 09:49:18.665481] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:31:43.676 [2024-12-09 09:49:18.665520] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956654 ] 00:31:43.676 [2024-12-09 09:49:18.721794] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:31:43.676 [2024-12-09 09:49:18.721851] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:43.676 [2024-12-09 09:49:18.721856] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:43.676 [2024-12-09 09:49:18.721870] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:43.676 [2024-12-09 09:49:18.721879] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:43.676 [2024-12-09 09:49:18.722430] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:31:43.676 [2024-12-09 09:49:18.722461] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdb3ed0 0 00:31:43.676 [2024-12-09 09:49:18.728645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:43.676 [2024-12-09 09:49:18.728658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:43.676 [2024-12-09 09:49:18.728665] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:43.676 [2024-12-09 09:49:18.728669] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:43.676 [2024-12-09 09:49:18.728700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.676 [2024-12-09 09:49:18.728706] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.676 [2024-12-09 09:49:18.728711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.676 [2024-12-09 09:49:18.728725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:43.676 [2024-12-09 09:49:18.728743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.676 [2024-12-09 09:49:18.736650] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.676 [2024-12-09 09:49:18.736659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.676 [2024-12-09 09:49:18.736663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.676 [2024-12-09 09:49:18.736667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.676 [2024-12-09 09:49:18.736677] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:43.676 [2024-12-09 09:49:18.736684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:31:43.676 [2024-12-09 09:49:18.736689] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:31:43.676 [2024-12-09 09:49:18.736703] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.676 [2024-12-09 09:49:18.736708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.676 [2024-12-09 09:49:18.736711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.736719] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.736732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.736818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.736825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.736828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736832] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.736844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:31:43.677 [2024-12-09 09:49:18.736852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:31:43.677 [2024-12-09 09:49:18.736859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.736873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.736884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.736947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.736953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.736957] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736961] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.736966] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:31:43.677 [2024-12-09 09:49:18.736974] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.736981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736985] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.736989] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.736996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.737006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 
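The xtrace lines further up show host/identify.sh provisioning the target over JSON-RPC before this identify pass begins: a TCP transport with an 8192-byte in-capsule data size, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and 10.0.0.2:4420 listeners for both the subsystem and the discovery service. As a minimal sketch, the same target state can be reproduced by hand with scripts/rpc.py against an already-running nvmf_tgt (the address, NQN, and identifiers below are the ones used in this run; the rpc.py path is relative to the SPDK repository root):

  # assumes nvmf_tgt is running and reachable over the default RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

The final nvmf_get_subsystems call should return the same two-entry JSON document printed in the log above, with Malloc0 attached as nsid 1.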
00:31:43.677 [2024-12-09 09:49:18.737076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.737095] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.737104] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737108] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737111] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.737118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.737128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.737189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.737207] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:43.677 [2024-12-09 09:49:18.737212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.737222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.737332] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:31:43.677 [2024-12-09 09:49:18.737337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.737346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737350] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.737360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.737371] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.737434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737441] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.737453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:43.677 [2024-12-09 09:49:18.737462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.737477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.737486] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.737549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.737568] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:43.677 [2024-12-09 09:49:18.737573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:43.677 [2024-12-09 09:49:18.737581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:31:43.677 [2024-12-09 09:49:18.737588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:43.677 [2024-12-09 09:49:18.737597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.737608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.677 [2024-12-09 09:49:18.737618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.737727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.677 [2024-12-09 09:49:18.737736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.677 [2024-12-09 09:49:18.737740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb3ed0): datao=0, datal=4096, cccid=0 00:31:43.677 [2024-12-09 09:49:18.737749] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xe1f540) on tqpair(0xdb3ed0): expected_datao=0, payload_size=4096 00:31:43.677 [2024-12-09 09:49:18.737753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737766] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737770] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.677 [2024-12-09 09:49:18.737807] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:31:43.677 [2024-12-09 09:49:18.737812] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:31:43.677 [2024-12-09 09:49:18.737816] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:31:43.677 [2024-12-09 09:49:18.737822] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:31:43.677 [2024-12-09 09:49:18.737826] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:31:43.677 [2024-12-09 09:49:18.737831] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:31:43.677 [2024-12-09 09:49:18.737839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:43.677 [2024-12-09 09:49:18.737846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.677 [2024-12-09 09:49:18.737853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.677 [2024-12-09 09:49:18.737861] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:43.677 [2024-12-09 09:49:18.737872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.677 [2024-12-09 09:49:18.737938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.677 [2024-12-09 09:49:18.737945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.677 [2024-12-09 09:49:18.737948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.737952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.737959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.737963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.737967] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 
09:49:18.737973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.678 [2024-12-09 09:49:18.737979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.737983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.737986] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.737994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.678 [2024-12-09 09:49:18.738000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738004] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.678 [2024-12-09 09:49:18.738019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.678 [2024-12-09 09:49:18.738037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:43.678 [2024-12-09 09:49:18.738047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:43.678 [2024-12-09 09:49:18.738053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738064] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.678 [2024-12-09 09:49:18.738076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f540, cid 0, qid 0 00:31:43.678 [2024-12-09 09:49:18.738081] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f6c0, cid 1, qid 0 00:31:43.678 [2024-12-09 09:49:18.738086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f840, cid 2, qid 0 00:31:43.678 [2024-12-09 09:49:18.738091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.678 [2024-12-09 09:49:18.738096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fb40, cid 4, qid 0 00:31:43.678 [2024-12-09 09:49:18.738202] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.678 [2024-12-09 09:49:18.738209] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.678 [2024-12-09 09:49:18.738212] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 
[2024-12-09 09:49:18.738216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fb40) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.738221] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:31:43.678 [2024-12-09 09:49:18.738226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:31:43.678 [2024-12-09 09:49:18.738236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.678 [2024-12-09 09:49:18.738256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fb40, cid 4, qid 0 00:31:43.678 [2024-12-09 09:49:18.738327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.678 [2024-12-09 09:49:18.738334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.678 [2024-12-09 09:49:18.738337] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738341] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb3ed0): datao=0, datal=4096, cccid=4 00:31:43.678 [2024-12-09 09:49:18.738348] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe1fb40) on tqpair(0xdb3ed0): expected_datao=0, payload_size=4096 00:31:43.678 [2024-12-09 09:49:18.738352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738359] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.678 [2024-12-09 09:49:18.738381] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.678 [2024-12-09 09:49:18.738384] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fb40) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.738399] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:31:43.678 [2024-12-09 09:49:18.738419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738424] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.678 [2024-12-09 09:49:18.738437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.738450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.678 [2024-12-09 09:49:18.738464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fb40, cid 4, qid 0 00:31:43.678 [2024-12-09 09:49:18.738469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fcc0, cid 5, qid 0 00:31:43.678 [2024-12-09 09:49:18.738575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.678 [2024-12-09 09:49:18.738582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.678 [2024-12-09 09:49:18.738585] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738589] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb3ed0): datao=0, datal=1024, cccid=4 00:31:43.678 [2024-12-09 09:49:18.738593] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe1fb40) on tqpair(0xdb3ed0): expected_datao=0, payload_size=1024 00:31:43.678 [2024-12-09 09:49:18.738598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738605] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738608] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.678 [2024-12-09 09:49:18.738620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.678 [2024-12-09 09:49:18.738624] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.738628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fcc0) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.779727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.678 [2024-12-09 09:49:18.779738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.678 [2024-12-09 09:49:18.779742] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.779746] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fb40) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.779757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.779761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.779770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.678 [2024-12-09 09:49:18.779786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fb40, cid 4, qid 0 00:31:43.678 [2024-12-09 09:49:18.779867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.678 [2024-12-09 09:49:18.779873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.678 [2024-12-09 09:49:18.779877] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.779880] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb3ed0): datao=0, datal=3072, cccid=4 00:31:43.678 [2024-12-09 09:49:18.779885] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe1fb40) on tqpair(0xdb3ed0): expected_datao=0, payload_size=3072 00:31:43.678 [2024-12-09 09:49:18.779889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
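The GET LOG PAGE exchanges around this point are the host pulling the Discovery Log Page (log page identifier 0x70, visible in the low byte of cdw10) in several reads: 4096 bytes, then 1024- and 3072-byte pieces, and finally an 8-byte re-read of the generation counter before the tool renders the report that follows. As a cross-check outside the SPDK test harness, the same discovery service can be queried with nvme-cli, assuming it is installed on the host (a sketch; address and port are taken from this run):

  # query the SPDK discovery service with the standard Linux NVMe CLI
  nvme discover -t tcp -a 10.0.0.2 -s 4420

Either way the result should be the two records shown below: the discovery subsystem itself and the NVM subsystem nqn.2016-06.io.spdk:cnode1.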
00:31:43.678 [2024-12-09 09:49:18.779896] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.779900] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.780003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.678 [2024-12-09 09:49:18.780010] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.678 [2024-12-09 09:49:18.780014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.780018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fb40) on tqpair=0xdb3ed0 00:31:43.678 [2024-12-09 09:49:18.780026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.780030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdb3ed0) 00:31:43.678 [2024-12-09 09:49:18.780036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.678 [2024-12-09 09:49:18.780050] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1fb40, cid 4, qid 0 00:31:43.678 [2024-12-09 09:49:18.780124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.678 [2024-12-09 09:49:18.780131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.678 [2024-12-09 09:49:18.780134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.678 [2024-12-09 09:49:18.780138] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdb3ed0): datao=0, datal=8, cccid=4 00:31:43.678 [2024-12-09 09:49:18.780142] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe1fb40) on tqpair(0xdb3ed0): expected_datao=0, payload_size=8 00:31:43.678 [2024-12-09 09:49:18.780147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.679 [2024-12-09 09:49:18.780153] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.679 [2024-12-09 09:49:18.780157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.679 [2024-12-09 09:49:18.824647] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.679 [2024-12-09 09:49:18.824656] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.679 [2024-12-09 09:49:18.824660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.679 [2024-12-09 09:49:18.824664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1fb40) on tqpair=0xdb3ed0
00:31:43.679 =====================================================
00:31:43.679 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:31:43.679 =====================================================
00:31:43.679 Controller Capabilities/Features
00:31:43.679 ================================
00:31:43.679 Vendor ID: 0000
00:31:43.679 Subsystem Vendor ID: 0000
00:31:43.679 Serial Number: ....................
00:31:43.679 Model Number: ........................................
00:31:43.679 Firmware Version: 25.01
00:31:43.679 Recommended Arb Burst: 0
00:31:43.679 IEEE OUI Identifier: 00 00 00
00:31:43.679 Multi-path I/O
00:31:43.679 May have multiple subsystem ports: No
00:31:43.679 May have multiple controllers: No
00:31:43.679 Associated with SR-IOV VF: No
00:31:43.679 Max Data Transfer Size: 131072
00:31:43.679 Max Number of Namespaces: 0
00:31:43.679 Max Number of I/O Queues: 1024
00:31:43.679 NVMe Specification Version (VS): 1.3
00:31:43.679 NVMe Specification Version (Identify): 1.3
00:31:43.679 Maximum Queue Entries: 128
00:31:43.679 Contiguous Queues Required: Yes
00:31:43.679 Arbitration Mechanisms Supported
00:31:43.679 Weighted Round Robin: Not Supported
00:31:43.679 Vendor Specific: Not Supported
00:31:43.679 Reset Timeout: 15000 ms
00:31:43.679 Doorbell Stride: 4 bytes
00:31:43.679 NVM Subsystem Reset: Not Supported
00:31:43.679 Command Sets Supported
00:31:43.679 NVM Command Set: Supported
00:31:43.679 Boot Partition: Not Supported
00:31:43.679 Memory Page Size Minimum: 4096 bytes
00:31:43.679 Memory Page Size Maximum: 4096 bytes
00:31:43.679 Persistent Memory Region: Not Supported
00:31:43.679 Optional Asynchronous Events Supported
00:31:43.679 Namespace Attribute Notices: Not Supported
00:31:43.679 Firmware Activation Notices: Not Supported
00:31:43.679 ANA Change Notices: Not Supported
00:31:43.679 PLE Aggregate Log Change Notices: Not Supported
00:31:43.679 LBA Status Info Alert Notices: Not Supported
00:31:43.679 EGE Aggregate Log Change Notices: Not Supported
00:31:43.679 Normal NVM Subsystem Shutdown event: Not Supported
00:31:43.679 Zone Descriptor Change Notices: Not Supported
00:31:43.679 Discovery Log Change Notices: Supported
00:31:43.679 Controller Attributes
00:31:43.679 128-bit Host Identifier: Not Supported
00:31:43.679 Non-Operational Permissive Mode: Not Supported
00:31:43.679 NVM Sets: Not Supported
00:31:43.679 Read Recovery Levels: Not Supported
00:31:43.679 Endurance Groups: Not Supported
00:31:43.679 Predictable Latency Mode: Not Supported
00:31:43.679 Traffic Based Keep Alive: Not Supported
00:31:43.679 Namespace Granularity: Not Supported
00:31:43.679 SQ Associations: Not Supported
00:31:43.679 UUID List: Not Supported
00:31:43.679 Multi-Domain Subsystem: Not Supported
00:31:43.679 Fixed Capacity Management: Not Supported
00:31:43.679 Variable Capacity Management: Not Supported
00:31:43.679 Delete Endurance Group: Not Supported
00:31:43.679 Delete NVM Set: Not Supported
00:31:43.679 Extended LBA Formats Supported: Not Supported
00:31:43.679 Flexible Data Placement Supported: Not Supported
00:31:43.679
00:31:43.679 Controller Memory Buffer Support
00:31:43.679 ================================
00:31:43.679 Supported: No
00:31:43.679
00:31:43.679 Persistent Memory Region Support
00:31:43.679 ================================
00:31:43.679 Supported: No
00:31:43.679
00:31:43.679 Admin Command Set Attributes
00:31:43.679 ============================
00:31:43.679 Security Send/Receive: Not Supported
00:31:43.679 Format NVM: Not Supported
00:31:43.679 Firmware Activate/Download: Not Supported
00:31:43.679 Namespace Management: Not Supported
00:31:43.679 Device Self-Test: Not Supported
00:31:43.679 Directives: Not Supported
00:31:43.679 NVMe-MI: Not Supported
00:31:43.679 Virtualization Management: Not Supported
00:31:43.679 Doorbell Buffer Config: Not Supported
00:31:43.679 Get LBA Status Capability: Not Supported
00:31:43.679 Command & Feature Lockdown Capability: Not Supported
00:31:43.679 Abort Command Limit: 1
00:31:43.679 Async Event Request Limit: 4
00:31:43.679 Number of Firmware Slots: N/A
00:31:43.679 Firmware Slot 1 Read-Only: N/A
00:31:43.679 Firmware Activation Without Reset: N/A
00:31:43.679 Multiple Update Detection Support: N/A
00:31:43.679 Firmware Update Granularity: No Information Provided
00:31:43.679 Per-Namespace SMART Log: No
00:31:43.679 Asymmetric Namespace Access Log Page: Not Supported
00:31:43.679 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:31:43.679 Command Effects Log Page: Not Supported
00:31:43.679 Get Log Page Extended Data: Supported
00:31:43.679 Telemetry Log Pages: Not Supported
00:31:43.679 Persistent Event Log Pages: Not Supported
00:31:43.679 Supported Log Pages Log Page: May Support
00:31:43.679 Commands Supported & Effects Log Page: Not Supported
00:31:43.679 Feature Identifiers & Effects Log Page: May Support
00:31:43.679 NVMe-MI Commands & Effects Log Page: May Support
00:31:43.679 Data Area 4 for Telemetry Log: Not Supported
00:31:43.679 Error Log Page Entries Supported: 128
00:31:43.679 Keep Alive: Not Supported
00:31:43.679
00:31:43.679 NVM Command Set Attributes
00:31:43.679 ==========================
00:31:43.679 Submission Queue Entry Size
00:31:43.679 Max: 1
00:31:43.679 Min: 1
00:31:43.679 Completion Queue Entry Size
00:31:43.679 Max: 1
00:31:43.679 Min: 1
00:31:43.679 Number of Namespaces: 0
00:31:43.679 Compare Command: Not Supported
00:31:43.679 Write Uncorrectable Command: Not Supported
00:31:43.679 Dataset Management Command: Not Supported
00:31:43.679 Write Zeroes Command: Not Supported
00:31:43.679 Set Features Save Field: Not Supported
00:31:43.679 Reservations: Not Supported
00:31:43.679 Timestamp: Not Supported
00:31:43.679 Copy: Not Supported
00:31:43.679 Volatile Write Cache: Not Present
00:31:43.679 Atomic Write Unit (Normal): 1
00:31:43.679 Atomic Write Unit (PFail): 1
00:31:43.679 Atomic Compare & Write Unit: 1
00:31:43.679 Fused Compare & Write: Supported
00:31:43.679 Scatter-Gather List
00:31:43.679 SGL Command Set: Supported
00:31:43.679 SGL Keyed: Supported
00:31:43.679 SGL Bit Bucket Descriptor: Not Supported
00:31:43.679 SGL Metadata Pointer: Not Supported
00:31:43.679 Oversized SGL: Not Supported
00:31:43.679 SGL Metadata Address: Not Supported
00:31:43.679 SGL Offset: Supported
00:31:43.679 Transport SGL Data Block: Not Supported
00:31:43.679 Replay Protected Memory Block: Not Supported
00:31:43.679
00:31:43.679 Firmware Slot Information
00:31:43.679 =========================
00:31:43.679 Active slot: 0
00:31:43.679
00:31:43.679
00:31:43.679 Error Log
00:31:43.679 =========
00:31:43.679
00:31:43.679 Active Namespaces
00:31:43.679 =================
00:31:43.679 Discovery Log Page
00:31:43.679 ==================
00:31:43.679 Generation Counter: 2
00:31:43.679 Number of Records: 2
00:31:43.679 Record Format: 0
00:31:43.679
00:31:43.679 Discovery Log Entry 0
00:31:43.679 ----------------------
00:31:43.680 Transport Type: 3 (TCP)
00:31:43.680 Address Family: 1 (IPv4)
00:31:43.680 Subsystem Type: 3 (Current Discovery Subsystem)
00:31:43.680 Entry Flags:
00:31:43.680 Duplicate Returned Information: 1
00:31:43.680 Explicit Persistent Connection Support for Discovery: 1
00:31:43.680 Transport Requirements:
00:31:43.680 Secure Channel: Not Required
00:31:43.680 Port ID: 0 (0x0000)
00:31:43.680 Controller ID: 65535 (0xffff)
00:31:43.680 Admin Max SQ Size: 128
00:31:43.680 Transport Service Identifier: 4420
00:31:43.680 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:31:43.680 Transport Address: 10.0.0.2
00:31:43.680 Discovery Log Entry 1
00:31:43.680 ----------------------
00:31:43.680 Transport Type: 3 (TCP)
00:31:43.680 Address Family: 1 (IPv4)
00:31:43.680 Subsystem Type: 2 (NVM Subsystem)
00:31:43.680 Entry Flags:
00:31:43.680 Duplicate Returned Information: 0
00:31:43.680 Explicit Persistent Connection Support for Discovery: 0
00:31:43.680 Transport Requirements:
00:31:43.680 Secure Channel: Not Required
00:31:43.680 Port ID: 0 (0x0000)
00:31:43.680 Controller ID: 65535 (0xffff)
00:31:43.680 Admin Max SQ Size: 128
00:31:43.680 Transport Service Identifier: 4420
00:31:43.680 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:31:43.680 Transport Address: 10.0.0.2
[2024-12-09 09:49:18.824752] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:31:43.680 [2024-12-09 09:49:18.824763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f540) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.824770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.680 [2024-12-09 09:49:18.824776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f6c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.824780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.680 [2024-12-09 09:49:18.824787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f840) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.824792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.680 [2024-12-09 09:49:18.824797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.824801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.680 [2024-12-09 09:49:18.824811] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.824816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.824819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.824827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.824840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.824900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.824906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.824910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.824914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.824921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.824925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.824928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.824935]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.824948] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825033] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:31:43.680 [2024-12-09 09:49:18.825038] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:31:43.680 [2024-12-09 09:49:18.825048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825072] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825134] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825158] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825185] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825260] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825281] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825393] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825481] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825488] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825522] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825594] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825604] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825608] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.680 [2024-12-09 09:49:18.825618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.680 [2024-12-09 09:49:18.825630] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.680 [2024-12-09 09:49:18.825701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.680 [2024-12-09 09:49:18.825708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.680 [2024-12-09 09:49:18.825711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.680 [2024-12-09 09:49:18.825724] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.680 [2024-12-09 09:49:18.825728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.681 [2024-12-09 09:49:18.825739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.681 [2024-12-09 09:49:18.825749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.681 [2024-12-09 09:49:18.825833] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.681 [2024-12-09 09:49:18.825839] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.681 [2024-12-09 09:49:18.825842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.681 [2024-12-09 09:49:18.825856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.681 [2024-12-09 09:49:18.825871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.681 [2024-12-09 09:49:18.825880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.681 [2024-12-09 09:49:18.825945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.681 [2024-12-09 09:49:18.825952] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.681 [2024-12-09 09:49:18.825955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.681 [2024-12-09 09:49:18.825968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.825976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.681 [2024-12-09 09:49:18.825983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.681 [2024-12-09 09:49:18.825992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.681 [2024-12-09 09:49:18.826048] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.681 [2024-12-09 09:49:18.826054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.681 [2024-12-09 09:49:18.826058] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.826062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.681 [2024-12-09 09:49:18.826071] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.826075] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.681 [2024-12-09 09:49:18.826079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.681 [2024-12-09 09:49:18.826086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.681 [2024-12-09 09:49:18.826095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.681
[... the same CapsuleResp (pdu type 5) / FABRIC PROPERTY GET poll sequence for tcp_req 0xe1f9c0 on tqpair 0xdb3ed0 repeats, with only the timestamps advancing, from 09:49:18.826162 through 09:49:18.828478 ...]
[2024-12-09 09:49:18.828540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.828546] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.828550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.828554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0
00:31:43.683 [2024-12-09 09:49:18.828563] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.828567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.828571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.683 [2024-12-09 09:49:18.828577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.828587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.683 [2024-12-09 09:49:18.832646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.832654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.832658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.832662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.683 [2024-12-09 09:49:18.832672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.832676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.832679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdb3ed0) 00:31:43.683 [2024-12-09 09:49:18.832686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.832697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe1f9c0, cid 3, qid 0 00:31:43.683 [2024-12-09 09:49:18.832764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.832770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.832773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.832777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe1f9c0) on tqpair=0xdb3ed0 00:31:43.683 [2024-12-09 09:49:18.832785] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:31:43.683 00:31:43.683 09:49:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:31:43.683 [2024-12-09 09:49:18.875040] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
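The host/identify.sh step above runs SPDK's spdk_nvme_identify example against the TCP target described by the -r transport ID string; -L all turns on every debug log flag, which is what produces the verbose *DEBUG* traces on either side of this point. As a rough sketch of what that tool does through SPDK's public API (option handling and error paths trimmed; the program name and printed fields here are illustrative choices, not taken from the tool's source):

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same transport ID string the harness passes with -r. */
        memset(&trid, 0, sizeof(trid));
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Synchronous connect; drives the whole init state machine traced
         * below: icreq/icresp, FABRIC CONNECT, CC/CSTS property accesses,
         * IDENTIFY, AER setup. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("Serial Number: %.20s\n", (const char *)cdata->sn);
        printf("Model Number: %.40s\n", (const char *)cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }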
00:31:43.683 [2024-12-09 09:49:18.875087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956745 ] 00:31:43.683 [2024-12-09 09:49:18.930694] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:31:43.683 [2024-12-09 09:49:18.930747] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:31:43.683 [2024-12-09 09:49:18.930753] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:31:43.683 [2024-12-09 09:49:18.930766] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:31:43.683 [2024-12-09 09:49:18.930774] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:31:43.683 [2024-12-09 09:49:18.931359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:31:43.683 [2024-12-09 09:49:18.931391] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c32ed0 0 00:31:43.683 [2024-12-09 09:49:18.937648] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:31:43.683 [2024-12-09 09:49:18.937660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:31:43.683 [2024-12-09 09:49:18.937667] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:31:43.683 [2024-12-09 09:49:18.937670] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:31:43.683 [2024-12-09 09:49:18.937696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.937702] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.937706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.683 [2024-12-09 09:49:18.937717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:43.683 [2024-12-09 09:49:18.937734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.683 [2024-12-09 09:49:18.945651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.945660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.945664] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.683 [2024-12-09 09:49:18.945678] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:31:43.683 [2024-12-09 09:49:18.945684] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:31:43.683 [2024-12-09 09:49:18.945690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:31:43.683 [2024-12-09 09:49:18.945703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945711] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.683 [2024-12-09 09:49:18.945718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.945732] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.683 [2024-12-09 09:49:18.945888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.945895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.945898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945902] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.683 [2024-12-09 09:49:18.945909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:31:43.683 [2024-12-09 09:49:18.945917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:31:43.683 [2024-12-09 09:49:18.945924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.945931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.683 [2024-12-09 09:49:18.945940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.945952] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.683 [2024-12-09 09:49:18.946107] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.946114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.946117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.683 [2024-12-09 09:49:18.946126] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:31:43.683 [2024-12-09 09:49:18.946134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:31:43.683 [2024-12-09 09:49:18.946141] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.683 [2024-12-09 09:49:18.946155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.946165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.683 [2024-12-09 09:49:18.946212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.946219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 
09:49:18.946222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.683 [2024-12-09 09:49:18.946231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:31:43.683 [2024-12-09 09:49:18.946240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.683 [2024-12-09 09:49:18.946254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.683 [2024-12-09 09:49:18.946264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.683 [2024-12-09 09:49:18.946317] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.683 [2024-12-09 09:49:18.946323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.683 [2024-12-09 09:49:18.946327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.683 [2024-12-09 09:49:18.946331] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.683 [2024-12-09 09:49:18.946335] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:31:43.684 [2024-12-09 09:49:18.946340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:31:43.684 [2024-12-09 09:49:18.946347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:31:43.684 [2024-12-09 09:49:18.946457] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:31:43.684 [2024-12-09 09:49:18.946462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:31:43.684 [2024-12-09 09:49:18.946470] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.946475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.946479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.946486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.684 [2024-12-09 09:49:18.946496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.684 [2024-12-09 09:49:18.946550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.684 [2024-12-09 09:49:18.946556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.684 [2024-12-09 09:49:18.946560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.946564] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.684 
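The state transitions logged around this point (check en; disable and wait for CSTS.RDY = 0; Setting CC.EN = 1; then the wait for CSTS.RDY = 1 just below) are the standard NVMe controller-enable handshake, carried over Fabrics Property Get/Set capsules rather than MMIO; in NVMe/TCP terms, pdu type 1 above is an ICResp, 4/5 are CapsuleCmd/CapsuleResp, and 7 is C2HData. A minimal sketch of the enable loop, where prop_get()/prop_set() are hypothetical stand-ins for whatever issues the property commands:

    #include <stdbool.h>
    #include <stdint.h>

    /* Register offsets and bit positions from the NVMe specification;
     * the two helpers below are assumed, not SPDK API. */
    #define NVME_REG_CC   0x14
    #define NVME_REG_CSTS 0x1c
    #define NVME_CC_EN    (1u << 0)
    #define NVME_CSTS_RDY (1u << 0)

    uint32_t prop_get(uint32_t ofst);           /* assumed helper */
    void prop_set(uint32_t ofst, uint32_t val); /* assumed helper */

    /* Mirrors the "disable ... CC.EN = 1 ... wait for CSTS.RDY = 1"
     * states in the trace; SPDK bounds each wait with the 15000 ms
     * timeouts shown above. */
    static bool
    enable_controller(void)
    {
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) & ~NVME_CC_EN);
        while (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) {
            /* each pass shows up as one FABRIC PROPERTY GET capsule */
        }

        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
        while (!(prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
            /* poll until the target reports ready */
        }
        return true;
    }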
[2024-12-09 09:49:18.946568] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:31:43.684 [2024-12-09 09:49:18.946578] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.946582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.946585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.946592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.684 [2024-12-09 09:49:18.946602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.684 [2024-12-09 09:49:18.950645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.684 [2024-12-09 09:49:18.950653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.684 [2024-12-09 09:49:18.950657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.950661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.684 [2024-12-09 09:49:18.950665] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:31:43.684 [2024-12-09 09:49:18.950670] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.950678] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:31:43.684 [2024-12-09 09:49:18.950690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.950698] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.950701] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.950708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.684 [2024-12-09 09:49:18.950720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.684 [2024-12-09 09:49:18.950885] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.684 [2024-12-09 09:49:18.950892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.684 [2024-12-09 09:49:18.950895] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.950899] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=4096, cccid=0 00:31:43.684 [2024-12-09 09:49:18.950904] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9e540) on tqpair(0x1c32ed0): expected_datao=0, payload_size=4096 00:31:43.684 [2024-12-09 09:49:18.950908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.950926] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.950933] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951074] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.684 [2024-12-09 09:49:18.951081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.684 [2024-12-09 09:49:18.951084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951088] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.684 [2024-12-09 09:49:18.951100] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:31:43.684 [2024-12-09 09:49:18.951105] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:31:43.684 [2024-12-09 09:49:18.951109] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:31:43.684 [2024-12-09 09:49:18.951113] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:31:43.684 [2024-12-09 09:49:18.951118] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:31:43.684 [2024-12-09 09:49:18.951123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.951131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.951138] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951142] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951145] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:43.684 [2024-12-09 09:49:18.951163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.684 [2024-12-09 09:49:18.951316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.684 [2024-12-09 09:49:18.951323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.684 [2024-12-09 09:49:18.951327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.684 [2024-12-09 09:49:18.951337] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.684 [2024-12-09 09:49:18.951357] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951360] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 
09:49:18.951364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.684 [2024-12-09 09:49:18.951376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951380] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951383] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.684 [2024-12-09 09:49:18.951395] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.684 [2024-12-09 09:49:18.951415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.951424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:31:43.684 [2024-12-09 09:49:18.951431] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.684 [2024-12-09 09:49:18.951434] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.684 [2024-12-09 09:49:18.951441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.684 [2024-12-09 09:49:18.951453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e540, cid 0, qid 0 00:31:43.684 [2024-12-09 09:49:18.951458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e6c0, cid 1, qid 0 00:31:43.684 [2024-12-09 09:49:18.951463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e840, cid 2, qid 0 00:31:43.684 [2024-12-09 09:49:18.951468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.685 [2024-12-09 09:49:18.951472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:18.951645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:18.951652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:18.951656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.951659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:18.951664] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:31:43.685 [2024-12-09 09:49:18.951669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.951678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.951684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.951690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.951694] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.951697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:18.951704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:43.685 [2024-12-09 09:49:18.951715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:18.951911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:18.951918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:18.951921] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.951925] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:18.951989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.951997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.952007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.952010] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:18.952017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.685 [2024-12-09 09:49:18.952027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:18.952226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.685 [2024-12-09 09:49:18.952233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.685 [2024-12-09 09:49:18.952236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.952240] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=4096, cccid=4 00:31:43.685 [2024-12-09 09:49:18.952245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9eb40) on tqpair(0x1c32ed0): expected_datao=0, payload_size=4096 00:31:43.685 [2024-12-09 09:49:18.952249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.952262] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.952266] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 
09:49:18.992805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:18.992815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:18.992818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.992822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:18.992831] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:31:43.685 [2024-12-09 09:49:18.992842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.992851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.992858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.992861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:18.992868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.685 [2024-12-09 09:49:18.992880] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:18.993138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.685 [2024-12-09 09:49:18.993145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.685 [2024-12-09 09:49:18.993148] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993152] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=4096, cccid=4 00:31:43.685 [2024-12-09 09:49:18.993156] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9eb40) on tqpair(0x1c32ed0): expected_datao=0, payload_size=4096 00:31:43.685 [2024-12-09 09:49:18.993161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993167] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:18.993366] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:18.993369] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:18.993386] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.993396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:18.993403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993406] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:18.993413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.685 [2024-12-09 09:49:18.993424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:18.993633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.685 [2024-12-09 09:49:18.993643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.685 [2024-12-09 09:49:18.993647] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=4096, cccid=4 00:31:43.685 [2024-12-09 09:49:18.993655] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9eb40) on tqpair(0x1c32ed0): expected_datao=0, payload_size=4096 00:31:43.685 [2024-12-09 09:49:18.993659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993670] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:18.993674] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:19.034781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:19.034790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:19.034794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:19.034798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:19.034805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034829] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034834] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034840] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034845] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:31:43.685 [2024-12-09 09:49:19.034850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:31:43.685 [2024-12-09 09:49:19.034855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:31:43.685 [2024-12-09 09:49:19.034870] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 
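With initialization complete ("setting state to ready"), the identify pass above has walked the spec's CNS values in order: IDENTIFY cdw10:00000001 (identify controller), cdw10:00000002 (active namespace list), nsid:1 cdw10:00000000 (identify namespace), and cdw10:00000003 (namespace ID descriptors), each answered by a 4096-byte C2HData PDU. A sketch of issuing one such step through SPDK's raw admin-command interface (the callback and buffer handling are simplified assumptions):

    #include <string.h>
    #include "spdk/nvme.h"

    static void
    identify_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* handle the failed admin command */
        }
    }

    /* Issue IDENTIFY (opcode 0x06) with CNS 0x00 for one namespace,
     * matching the "IDENTIFY ... nsid:1 cdw10:00000000" capsule above.
     * ns_data must be a 4096-byte DMA-able buffer (e.g. from
     * spdk_dma_zmalloc()). */
    static int
    identify_ns(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid, void *ns_data)
    {
        struct spdk_nvme_cmd cmd;

        memset(&cmd, 0, sizeof(cmd));
        cmd.opc = SPDK_NVME_OPC_IDENTIFY;
        cmd.nsid = nsid;
        cmd.cdw10 = SPDK_NVME_IDENTIFY_NS; /* CNS 0x00 */

        return spdk_nvme_ctrlr_cmd_admin_raw(ctrlr, &cmd, ns_data, 4096,
                                             identify_done, NULL);
    }

The completion only fires once the caller polls spdk_nvme_ctrlr_process_admin_completions(ctrlr), which is what turns the CapsuleResp PDUs above into the "complete tcp_req(...)" entries.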
[2024-12-09 09:49:19.034874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:19.034881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.685 [2024-12-09 09:49:19.034891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:19.034895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:19.034899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c32ed0) 00:31:43.685 [2024-12-09 09:49:19.034905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:31:43.685 [2024-12-09 09:49:19.034919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.685 [2024-12-09 09:49:19.034924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ecc0, cid 5, qid 0 00:31:43.685 [2024-12-09 09:49:19.035112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:19.035118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:19.035122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.685 [2024-12-09 09:49:19.035126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.685 [2024-12-09 09:49:19.035132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.685 [2024-12-09 09:49:19.035138] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.685 [2024-12-09 09:49:19.035142] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035145] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ecc0) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.035154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035158] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035165] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035175] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ecc0, cid 5, qid 0 00:31:43.686 [2024-12-09 09:49:19.035341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.035348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.035351] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035355] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ecc0) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.035364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035368] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035384] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ecc0, cid 5, qid 0 00:31:43.686 [2024-12-09 09:49:19.035606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.035612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.035616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ecc0) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.035629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035643] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035653] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ecc0, cid 5, qid 0 00:31:43.686 [2024-12-09 09:49:19.035898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.035904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.035909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ecc0) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.035927] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.035979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.035982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c32ed0) 00:31:43.686 [2024-12-09 09:49:19.035989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.686 [2024-12-09 09:49:19.036000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ecc0, cid 5, qid 0 00:31:43.686 
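The four GET LOG PAGE capsules above decode per the spec's CDW10 layout (log identifier in bits 7:0, zero-based dword count in bits 31:16): 07ff0001 is the error information log (LID 0x01, 2048 dwords = 8192 bytes), 007f0002 the SMART / health log (LID 0x02, 128 dwords = 512 bytes), 007f0003 the firmware slot log (LID 0x03, 512 bytes), and 03ff0005 the commands supported and effects log (LID 0x05, 4096 bytes); nsid:ffffffff is the global namespace tag. SPDK wraps this in a helper, sketched here for the health page (buffer management simplified):

    #include "spdk/nvme.h"

    static void
    log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        (void)cb_arg;
        (void)cpl; /* inspect completion status here */
    }

    /* Fetch the SMART / health log (LID 0x02) for all namespaces; this
     * is the "GET LOG PAGE ... cdw10:007f0002" capsule in the trace.
     * buf should point at a struct spdk_nvme_health_information_page. */
    static int
    fetch_health_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
    {
        return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION,
                SPDK_NVME_GLOBAL_NS_TAG, buf, len, 0,
                log_page_done, NULL);
    }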
[2024-12-09 09:49:19.036005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9eb40, cid 4, qid 0 00:31:43.686 [2024-12-09 09:49:19.036010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9ee40, cid 6, qid 0 00:31:43.686 [2024-12-09 09:49:19.036015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9efc0, cid 7, qid 0 00:31:43.686 [2024-12-09 09:49:19.036265] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.686 [2024-12-09 09:49:19.036272] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.686 [2024-12-09 09:49:19.036275] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=8192, cccid=5 00:31:43.686 [2024-12-09 09:49:19.036283] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9ecc0) on tqpair(0x1c32ed0): expected_datao=0, payload_size=8192 00:31:43.686 [2024-12-09 09:49:19.036288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036381] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036386] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.686 [2024-12-09 09:49:19.036397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.686 [2024-12-09 09:49:19.036400] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036404] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=512, cccid=4 00:31:43.686 [2024-12-09 09:49:19.036409] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9eb40) on tqpair(0x1c32ed0): expected_datao=0, payload_size=512 00:31:43.686 [2024-12-09 09:49:19.036413] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036431] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036435] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.686 [2024-12-09 09:49:19.036448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.686 [2024-12-09 09:49:19.036452] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036455] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=512, cccid=6 00:31:43.686 [2024-12-09 09:49:19.036460] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9ee40) on tqpair(0x1c32ed0): expected_datao=0, payload_size=512 00:31:43.686 [2024-12-09 09:49:19.036464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036470] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036474] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036480] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:31:43.686 [2024-12-09 09:49:19.036485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:31:43.686 [2024-12-09 09:49:19.036489] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036492] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c32ed0): datao=0, datal=4096, cccid=7 00:31:43.686 [2024-12-09 09:49:19.036497] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c9efc0) on tqpair(0x1c32ed0): expected_datao=0, payload_size=4096 00:31:43.686 [2024-12-09 09:49:19.036501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036508] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036511] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036615] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.036621] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.036625] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.036629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ecc0) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.040645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.040653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.040657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.040660] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9eb40) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.040671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.040677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.040680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.040684] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9ee40) on tqpair=0x1c32ed0 00:31:43.686 [2024-12-09 09:49:19.040691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.686 [2024-12-09 09:49:19.040697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.686 [2024-12-09 09:49:19.040700] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.686 [2024-12-09 09:49:19.040704] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9efc0) on tqpair=0x1c32ed0 00:31:43.686 ===================================================== 00:31:43.686 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.686 ===================================================== 00:31:43.686 Controller Capabilities/Features 00:31:43.686 ================================ 00:31:43.686 Vendor ID: 8086 00:31:43.686 Subsystem Vendor ID: 8086 00:31:43.686 Serial Number: SPDK00000000000001 00:31:43.686 Model Number: SPDK bdev Controller 00:31:43.686 Firmware Version: 25.01 00:31:43.686 Recommended Arb Burst: 6 00:31:43.686 IEEE OUI Identifier: e4 d2 5c 00:31:43.686 Multi-path I/O 00:31:43.686 May have multiple subsystem ports: Yes 00:31:43.686 May have multiple controllers: Yes 00:31:43.686 Associated with SR-IOV VF: No 00:31:43.686 Max Data Transfer Size: 131072 00:31:43.686 Max Number of Namespaces: 32 00:31:43.686 Max Number of I/O Queues: 127 00:31:43.686 NVMe Specification Version (VS): 1.3 00:31:43.686 NVMe Specification Version (Identify): 1.3 
00:31:43.686 Maximum Queue Entries: 128 00:31:43.686 Contiguous Queues Required: Yes 00:31:43.686 Arbitration Mechanisms Supported 00:31:43.686 Weighted Round Robin: Not Supported 00:31:43.686 Vendor Specific: Not Supported 00:31:43.686 Reset Timeout: 15000 ms 00:31:43.686 Doorbell Stride: 4 bytes 00:31:43.687 NVM Subsystem Reset: Not Supported 00:31:43.687 Command Sets Supported 00:31:43.687 NVM Command Set: Supported 00:31:43.687 Boot Partition: Not Supported 00:31:43.687 Memory Page Size Minimum: 4096 bytes 00:31:43.687 Memory Page Size Maximum: 4096 bytes 00:31:43.687 Persistent Memory Region: Not Supported 00:31:43.687 Optional Asynchronous Events Supported 00:31:43.687 Namespace Attribute Notices: Supported 00:31:43.687 Firmware Activation Notices: Not Supported 00:31:43.687 ANA Change Notices: Not Supported 00:31:43.687 PLE Aggregate Log Change Notices: Not Supported 00:31:43.687 LBA Status Info Alert Notices: Not Supported 00:31:43.687 EGE Aggregate Log Change Notices: Not Supported 00:31:43.687 Normal NVM Subsystem Shutdown event: Not Supported 00:31:43.687 Zone Descriptor Change Notices: Not Supported 00:31:43.687 Discovery Log Change Notices: Not Supported 00:31:43.687 Controller Attributes 00:31:43.687 128-bit Host Identifier: Supported 00:31:43.687 Non-Operational Permissive Mode: Not Supported 00:31:43.687 NVM Sets: Not Supported 00:31:43.687 Read Recovery Levels: Not Supported 00:31:43.687 Endurance Groups: Not Supported 00:31:43.687 Predictable Latency Mode: Not Supported 00:31:43.687 Traffic Based Keep ALive: Not Supported 00:31:43.687 Namespace Granularity: Not Supported 00:31:43.687 SQ Associations: Not Supported 00:31:43.687 UUID List: Not Supported 00:31:43.687 Multi-Domain Subsystem: Not Supported 00:31:43.687 Fixed Capacity Management: Not Supported 00:31:43.687 Variable Capacity Management: Not Supported 00:31:43.687 Delete Endurance Group: Not Supported 00:31:43.687 Delete NVM Set: Not Supported 00:31:43.687 Extended LBA Formats Supported: Not Supported 00:31:43.687 Flexible Data Placement Supported: Not Supported 00:31:43.687 00:31:43.687 Controller Memory Buffer Support 00:31:43.687 ================================ 00:31:43.687 Supported: No 00:31:43.687 00:31:43.687 Persistent Memory Region Support 00:31:43.687 ================================ 00:31:43.687 Supported: No 00:31:43.687 00:31:43.687 Admin Command Set Attributes 00:31:43.687 ============================ 00:31:43.687 Security Send/Receive: Not Supported 00:31:43.687 Format NVM: Not Supported 00:31:43.687 Firmware Activate/Download: Not Supported 00:31:43.687 Namespace Management: Not Supported 00:31:43.687 Device Self-Test: Not Supported 00:31:43.687 Directives: Not Supported 00:31:43.687 NVMe-MI: Not Supported 00:31:43.687 Virtualization Management: Not Supported 00:31:43.687 Doorbell Buffer Config: Not Supported 00:31:43.687 Get LBA Status Capability: Not Supported 00:31:43.687 Command & Feature Lockdown Capability: Not Supported 00:31:43.687 Abort Command Limit: 4 00:31:43.687 Async Event Request Limit: 4 00:31:43.687 Number of Firmware Slots: N/A 00:31:43.687 Firmware Slot 1 Read-Only: N/A 00:31:43.687 Firmware Activation Without Reset: N/A 00:31:43.687 Multiple Update Detection Support: N/A 00:31:43.687 Firmware Update Granularity: No Information Provided 00:31:43.687 Per-Namespace SMART Log: No 00:31:43.687 Asymmetric Namespace Access Log Page: Not Supported 00:31:43.687 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:31:43.687 Command Effects Log Page: Supported 00:31:43.687 Get Log Page Extended 
Data: Supported 00:31:43.687 Telemetry Log Pages: Not Supported 00:31:43.687 Persistent Event Log Pages: Not Supported 00:31:43.687 Supported Log Pages Log Page: May Support 00:31:43.687 Commands Supported & Effects Log Page: Not Supported 00:31:43.687 Feature Identifiers & Effects Log Page:May Support 00:31:43.687 NVMe-MI Commands & Effects Log Page: May Support 00:31:43.687 Data Area 4 for Telemetry Log: Not Supported 00:31:43.687 Error Log Page Entries Supported: 128 00:31:43.687 Keep Alive: Supported 00:31:43.687 Keep Alive Granularity: 10000 ms 00:31:43.687 00:31:43.687 NVM Command Set Attributes 00:31:43.687 ========================== 00:31:43.687 Submission Queue Entry Size 00:31:43.687 Max: 64 00:31:43.687 Min: 64 00:31:43.687 Completion Queue Entry Size 00:31:43.687 Max: 16 00:31:43.687 Min: 16 00:31:43.687 Number of Namespaces: 32 00:31:43.687 Compare Command: Supported 00:31:43.687 Write Uncorrectable Command: Not Supported 00:31:43.687 Dataset Management Command: Supported 00:31:43.687 Write Zeroes Command: Supported 00:31:43.687 Set Features Save Field: Not Supported 00:31:43.687 Reservations: Supported 00:31:43.687 Timestamp: Not Supported 00:31:43.687 Copy: Supported 00:31:43.687 Volatile Write Cache: Present 00:31:43.687 Atomic Write Unit (Normal): 1 00:31:43.687 Atomic Write Unit (PFail): 1 00:31:43.687 Atomic Compare & Write Unit: 1 00:31:43.687 Fused Compare & Write: Supported 00:31:43.687 Scatter-Gather List 00:31:43.687 SGL Command Set: Supported 00:31:43.687 SGL Keyed: Supported 00:31:43.687 SGL Bit Bucket Descriptor: Not Supported 00:31:43.687 SGL Metadata Pointer: Not Supported 00:31:43.687 Oversized SGL: Not Supported 00:31:43.687 SGL Metadata Address: Not Supported 00:31:43.687 SGL Offset: Supported 00:31:43.687 Transport SGL Data Block: Not Supported 00:31:43.687 Replay Protected Memory Block: Not Supported 00:31:43.687 00:31:43.687 Firmware Slot Information 00:31:43.687 ========================= 00:31:43.687 Active slot: 1 00:31:43.687 Slot 1 Firmware Revision: 25.01 00:31:43.687 00:31:43.687 00:31:43.687 Commands Supported and Effects 00:31:43.687 ============================== 00:31:43.687 Admin Commands 00:31:43.687 -------------- 00:31:43.687 Get Log Page (02h): Supported 00:31:43.687 Identify (06h): Supported 00:31:43.687 Abort (08h): Supported 00:31:43.687 Set Features (09h): Supported 00:31:43.687 Get Features (0Ah): Supported 00:31:43.687 Asynchronous Event Request (0Ch): Supported 00:31:43.687 Keep Alive (18h): Supported 00:31:43.687 I/O Commands 00:31:43.687 ------------ 00:31:43.687 Flush (00h): Supported LBA-Change 00:31:43.687 Write (01h): Supported LBA-Change 00:31:43.687 Read (02h): Supported 00:31:43.687 Compare (05h): Supported 00:31:43.687 Write Zeroes (08h): Supported LBA-Change 00:31:43.687 Dataset Management (09h): Supported LBA-Change 00:31:43.687 Copy (19h): Supported LBA-Change 00:31:43.687 00:31:43.687 Error Log 00:31:43.687 ========= 00:31:43.687 00:31:43.687 Arbitration 00:31:43.687 =========== 00:31:43.687 Arbitration Burst: 1 00:31:43.687 00:31:43.687 Power Management 00:31:43.687 ================ 00:31:43.687 Number of Power States: 1 00:31:43.687 Current Power State: Power State #0 00:31:43.687 Power State #0: 00:31:43.687 Max Power: 0.00 W 00:31:43.687 Non-Operational State: Operational 00:31:43.687 Entry Latency: Not Reported 00:31:43.687 Exit Latency: Not Reported 00:31:43.687 Relative Read Throughput: 0 00:31:43.687 Relative Read Latency: 0 00:31:43.687 Relative Write Throughput: 0 00:31:43.687 Relative Write Latency: 0 
00:31:43.687 Idle Power: Not Reported 00:31:43.687 Active Power: Not Reported 00:31:43.687 Non-Operational Permissive Mode: Not Supported 00:31:43.687 00:31:43.687 Health Information 00:31:43.687 ================== 00:31:43.687 Critical Warnings: 00:31:43.687 Available Spare Space: OK 00:31:43.687 Temperature: OK 00:31:43.687 Device Reliability: OK 00:31:43.687 Read Only: No 00:31:43.687 Volatile Memory Backup: OK 00:31:43.687 Current Temperature: 0 Kelvin (-273 Celsius) 00:31:43.687 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:31:43.687 Available Spare: 0% 00:31:43.687 Available Spare Threshold: 0% 00:31:43.687 Life Percentage Used:[2024-12-09 09:49:19.040803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.687 [2024-12-09 09:49:19.040809] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c32ed0) 00:31:43.687 [2024-12-09 09:49:19.040816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.687 [2024-12-09 09:49:19.040829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9efc0, cid 7, qid 0 00:31:43.687 [2024-12-09 09:49:19.041031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.687 [2024-12-09 09:49:19.041039] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.687 [2024-12-09 09:49:19.041044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.687 [2024-12-09 09:49:19.041048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9efc0) on tqpair=0x1c32ed0 00:31:43.687 [2024-12-09 09:49:19.041080] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:31:43.687 [2024-12-09 09:49:19.041090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e540) on tqpair=0x1c32ed0 00:31:43.687 [2024-12-09 09:49:19.041096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.687 [2024-12-09 09:49:19.041101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e6c0) on tqpair=0x1c32ed0 00:31:43.687 [2024-12-09 09:49:19.041106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.688 [2024-12-09 09:49:19.041111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e840) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.041116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.688 [2024-12-09 09:49:19.041120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.041125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:43.688 [2024-12-09 09:49:19.041133] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.041147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.041159] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.041311] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.041318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.041322] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041326] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.041332] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.041347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.041360] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.041534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.041540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.041544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.041552] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:31:43.688 [2024-12-09 09:49:19.041557] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:31:43.688 [2024-12-09 09:49:19.041566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041574] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.041581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.041593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.041761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.041768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.041771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.041785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.041799] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.041810] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.041978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.041985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.041988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.041992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.042002] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.042016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.042026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.042213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.042220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.042223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.042236] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.042251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.042261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.042448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.042455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.042458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.042471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.042486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.042497] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.042716] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.042723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.042726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.042740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042744] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042747] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.042754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.042764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.042910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.042917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.042920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042924] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.042933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042937] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.042941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.042948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.042957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.043145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 09:49:19.043152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.688 [2024-12-09 09:49:19.043156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.043159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.688 [2024-12-09 09:49:19.043169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.043173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.688 [2024-12-09 09:49:19.043177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.688 [2024-12-09 09:49:19.043183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.688 [2024-12-09 09:49:19.043193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.688 [2024-12-09 09:49:19.043366] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.688 [2024-12-09 
09:49:19.043372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.043375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.043389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043396] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.043403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.043413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.043606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.043613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.043616] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043620] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.043630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043641] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.043648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.043658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.043831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.043837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.043841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.043854] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043858] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.043861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.043868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.043878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.044065] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.044071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.044075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 
[2024-12-09 09:49:19.044078] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.044088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.044102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.044112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.044302] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.044308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.044312] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.044325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.044340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.044349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.044493] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.044501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.044504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.044518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.044526] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c32ed0) 00:31:43.689 [2024-12-09 09:49:19.044532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:43.689 [2024-12-09 09:49:19.044542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c9e9c0, cid 3, qid 0 00:31:43.689 [2024-12-09 09:49:19.048645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:31:43.689 [2024-12-09 09:49:19.048654] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:31:43.689 [2024-12-09 09:49:19.048657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:31:43.689 [2024-12-09 09:49:19.048661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1c9e9c0) on tqpair=0x1c32ed0 00:31:43.689 [2024-12-09 09:49:19.048670] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:31:43.689 0% 00:31:43.689 Data Units Read: 0 00:31:43.689 Data Units Written: 0 00:31:43.689 Host Read Commands: 0 00:31:43.689 Host Write Commands: 0 00:31:43.689 Controller Busy Time: 0 minutes 00:31:43.689 Power Cycles: 0 00:31:43.689 Power On Hours: 0 hours 00:31:43.689 Unsafe Shutdowns: 0 00:31:43.689 Unrecoverable Media Errors: 0 00:31:43.689 Lifetime Error Log Entries: 0 00:31:43.689 Warning Temperature Time: 0 minutes 00:31:43.689 Critical Temperature Time: 0 minutes 00:31:43.689 00:31:43.689 Number of Queues 00:31:43.689 ================ 00:31:43.689 Number of I/O Submission Queues: 127 00:31:43.689 Number of I/O Completion Queues: 127 00:31:43.689 00:31:43.689 Active Namespaces 00:31:43.689 ================= 00:31:43.689 Namespace ID:1 00:31:43.689 Error Recovery Timeout: Unlimited 00:31:43.689 Command Set Identifier: NVM (00h) 00:31:43.689 Deallocate: Supported 00:31:43.689 Deallocated/Unwritten Error: Not Supported 00:31:43.689 Deallocated Read Value: Unknown 00:31:43.689 Deallocate in Write Zeroes: Not Supported 00:31:43.689 Deallocated Guard Field: 0xFFFF 00:31:43.689 Flush: Supported 00:31:43.689 Reservation: Supported 00:31:43.689 Namespace Sharing Capabilities: Multiple Controllers 00:31:43.689 Size (in LBAs): 131072 (0GiB) 00:31:43.689 Capacity (in LBAs): 131072 (0GiB) 00:31:43.689 Utilization (in LBAs): 131072 (0GiB) 00:31:43.689 NGUID: ABCDEF0123456789ABCDEF0123456789 00:31:43.689 EUI64: ABCDEF0123456789 00:31:43.689 UUID: cd40461d-f8ff-4180-ba9a-2dfabd8bc896 00:31:43.689 Thin Provisioning: Not Supported 00:31:43.689 Per-NS Atomic Units: Yes 00:31:43.689 Atomic Boundary Size (Normal): 0 00:31:43.689 Atomic Boundary Size (PFail): 0 00:31:43.689 Atomic Boundary Offset: 0 00:31:43.689 Maximum Single Source Range Length: 65535 00:31:43.689 Maximum Copy Length: 65535 00:31:43.689 Maximum Source Range Count: 1 00:31:43.689 NGUID/EUI64 Never Reused: No 00:31:43.689 Namespace Write Protected: No 00:31:43.689 Number of LBA Formats: 1 00:31:43.689 Current LBA Format: LBA Format #00 00:31:43.689 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:43.689 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.689 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:31:43.689 rmmod nvme_tcp 00:31:43.689 rmmod nvme_fabrics 00:31:43.950 rmmod nvme_keyring 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2956504 ']' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2956504 ']' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2956504' 00:31:43.950 killing process with pid 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2956504 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.950 09:49:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:46.496 00:31:46.496 real 0m10.930s 00:31:46.496 user 0m5.914s 00:31:46.496 sys 0m6.096s 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:31:46.496 ************************************ 00:31:46.496 END TEST nvmf_identify 00:31:46.496 
************************************ 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:46.496 ************************************ 00:31:46.496 START TEST nvmf_perf 00:31:46.496 ************************************ 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:31:46.496 * Looking for test storage... 00:31:46.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.496 --rc genhtml_branch_coverage=1 00:31:46.496 --rc genhtml_function_coverage=1 00:31:46.496 --rc genhtml_legend=1 00:31:46.496 --rc geninfo_all_blocks=1 00:31:46.496 --rc geninfo_unexecuted_blocks=1 00:31:46.496 00:31:46.496 ' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.496 --rc genhtml_branch_coverage=1 00:31:46.496 --rc genhtml_function_coverage=1 00:31:46.496 --rc genhtml_legend=1 00:31:46.496 --rc geninfo_all_blocks=1 00:31:46.496 --rc geninfo_unexecuted_blocks=1 00:31:46.496 00:31:46.496 ' 00:31:46.496 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:46.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.496 --rc genhtml_branch_coverage=1 00:31:46.496 --rc genhtml_function_coverage=1 00:31:46.496 --rc genhtml_legend=1 00:31:46.496 --rc geninfo_all_blocks=1 00:31:46.496 --rc geninfo_unexecuted_blocks=1 00:31:46.496 00:31:46.496 ' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:46.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:46.497 --rc genhtml_branch_coverage=1 00:31:46.497 --rc genhtml_function_coverage=1 00:31:46.497 --rc genhtml_legend=1 00:31:46.497 --rc geninfo_all_blocks=1 00:31:46.497 --rc geninfo_unexecuted_blocks=1 00:31:46.497 00:31:46.497 ' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:46.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.497 09:49:21 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:31:46.497 09:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:54.637 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:54.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:54.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:54.638 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:54.638 09:49:28 
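Each matched PCI function is then resolved to its kernel interface by globbing sysfs, which is how 0000:4b:00.0 becomes cvl_0_0 in the "Found net devices" lines. The lookup, lifted straight from the trace into standalone form:

pci=0000:4b:00.0                                   # first E810 port from the trace
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per netdev
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the names
echo "Found net devices under $pci: ${pci_net_devs[*]}"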
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:54.638 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:54.638 09:49:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:54.638 09:49:29 
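The nvmf_tcp_init sequence above splits the two E810 ports across network namespaces so target and initiator traffic crosses the physical link even though both run on one host: cvl_0_0 (10.0.0.2, target) moves into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, initiator) stays in the root namespace. Condensed from the trace:

ip -4 addr flush cvl_0_0                          # drop any stale addressing
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # namespace that hosts the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up   # loopback inside the namespace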
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:54.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:54.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:31:54.638 00:31:54.638 --- 10.0.0.2 ping statistics --- 00:31:54.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.638 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:54.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:54.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:31:54.638 00:31:54.638 --- 10.0.0.1 ping statistics --- 00:31:54.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:54.638 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2960854 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2960854 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2960854 ']' 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:31:54.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:54.638 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:54.639 09:49:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.639 [2024-12-09 09:49:29.207788] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:31:54.639 [2024-12-09 09:49:29.207855] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.639 [2024-12-09 09:49:29.306606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:54.639 [2024-12-09 09:49:29.335534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:54.639 [2024-12-09 09:49:29.335585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:54.639 [2024-12-09 09:49:29.335595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:54.639 [2024-12-09 09:49:29.335602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:54.639 [2024-12-09 09:49:29.335608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:54.639 [2024-12-09 09:49:29.337577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:54.639 [2024-12-09 09:49:29.337804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:54.639 [2024-12-09 09:49:29.337805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.639 [2024-12-09 09:49:29.337695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:54.639 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:31:55.209 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:31:55.209 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:31:55.468 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:31:55.468 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:55.728 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
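Bring-up of the target itself, condensed from the trace: punch a firewall hole for NVMe/TCP port 4420 (tagged SPDK_NVMF so teardown can strip it later with iptables-save | grep -v SPDK_NVMF | iptables-restore), verify reachability both ways, then launch nvmf_tgt inside the target namespace and poll its RPC socket. The loop below is a simplified stand-in for waitforlisten in autotest_common.sh, which likewise retries up to max_retries=100:

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:allow nvmf tcp'    # tag is what teardown greps for
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &       # 4 cores, all tracepoint groups
nvmfpid=$!
for ((i = 0; i < 100; i++)); do                        # wait for /var/tmp/spdk.sock
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done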
00:31:55.728 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:31:55.728 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:31:55.728 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:31:55.728 09:49:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:31:55.728 [2024-12-09 09:49:31.088591] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.728 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:55.988 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:55.988 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:56.248 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:31:56.248 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:56.248 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:56.508 [2024-12-09 09:49:31.831369] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.508 09:49:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:56.781 09:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:31:56.781 09:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:56.781 09:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:31:56.781 09:49:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:31:58.165 Initializing NVMe Controllers 00:31:58.165 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:31:58.165 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:31:58.165 Initialization complete. Launching workers. 
00:31:58.165 ======================================================== 00:31:58.165 Latency(us) 00:31:58.165 Device Information : IOPS MiB/s Average min max 00:31:58.165 PCIE (0000:65:00.0) NSID 1 from core 0: 76983.68 300.72 414.86 13.35 4965.55 00:31:58.165 ======================================================== 00:31:58.165 Total : 76983.68 300.72 414.86 13.35 4965.55 00:31:58.165 00:31:58.165 09:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:59.550 Initializing NVMe Controllers 00:31:59.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:59.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:59.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:59.550 Initialization complete. Launching workers. 00:31:59.550 ======================================================== 00:31:59.550 Latency(us) 00:31:59.550 Device Information : IOPS MiB/s Average min max 00:31:59.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 118.00 0.46 8589.63 239.41 45635.82 00:31:59.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17957.38 7955.32 47907.04 00:31:59.550 ======================================================== 00:31:59.550 Total : 174.00 0.68 11604.54 239.41 47907.04 00:31:59.550 00:31:59.550 09:49:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:00.491 Initializing NVMe Controllers 00:32:00.491 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:00.491 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:00.491 Initialization complete. Launching workers. 00:32:00.491 ======================================================== 00:32:00.491 Latency(us) 00:32:00.491 Device Information : IOPS MiB/s Average min max 00:32:00.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11790.62 46.06 2731.59 421.79 46260.25 00:32:00.491 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3614.40 14.12 8867.53 4335.10 19343.72 00:32:00.491 ======================================================== 00:32:00.491 Total : 15405.02 60.18 4171.23 421.79 46260.25 00:32:00.491 00:32:00.491 09:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:32:00.491 09:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:32:00.491 09:49:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:03.030 Initializing NVMe Controllers 00:32:03.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:03.030 Controller IO queue size 128, less than required. 00:32:03.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:32:03.030 Controller IO queue size 128, less than required. 00:32:03.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:03.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:03.030 Initialization complete. Launching workers. 00:32:03.030 ======================================================== 00:32:03.030 Latency(us) 00:32:03.030 Device Information : IOPS MiB/s Average min max 00:32:03.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1925.56 481.39 67457.33 39870.27 123312.17 00:32:03.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.42 151.11 219675.19 48307.12 326105.21 00:32:03.030 ======================================================== 00:32:03.030 Total : 2529.98 632.50 103822.65 39870.27 326105.21 00:32:03.030 00:32:03.030 09:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:32:03.611 No valid NVMe controllers or AIO or URING devices found 00:32:03.611 Initializing NVMe Controllers 00:32:03.611 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:03.611 Controller IO queue size 128, less than required. 00:32:03.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.611 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:32:03.611 Controller IO queue size 128, less than required. 00:32:03.611 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:03.611 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:32:03.611 WARNING: Some requested NVMe devices were skipped 00:32:03.611 09:49:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:32:06.150 Initializing NVMe Controllers 00:32:06.150 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:06.150 Controller IO queue size 128, less than required. 00:32:06.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:06.150 Controller IO queue size 128, less than required. 00:32:06.150 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:06.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:06.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:06.150 Initialization complete. Launching workers. 
00:32:06.150 00:32:06.150 ==================== 00:32:06.150 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:32:06.150 TCP transport: 00:32:06.150 polls: 28929 00:32:06.150 idle_polls: 11824 00:32:06.150 sock_completions: 17105 00:32:06.150 nvme_completions: 7587 00:32:06.150 submitted_requests: 11340 00:32:06.150 queued_requests: 1 00:32:06.150 00:32:06.150 ==================== 00:32:06.150 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:32:06.150 TCP transport: 00:32:06.150 polls: 29165 00:32:06.150 idle_polls: 14011 00:32:06.150 sock_completions: 15154 00:32:06.150 nvme_completions: 8855 00:32:06.150 submitted_requests: 13196 00:32:06.150 queued_requests: 1 00:32:06.150 ======================================================== 00:32:06.150 Latency(us) 00:32:06.151 Device Information : IOPS MiB/s Average min max 00:32:06.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1896.36 474.09 68377.53 39743.59 113367.71 00:32:06.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2213.34 553.33 58373.55 24073.04 103080.62 00:32:06.151 ======================================================== 00:32:06.151 Total : 4109.70 1027.42 62989.74 24073.04 113367.71 00:32:06.151 00:32:06.151 09:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:32:06.151 09:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.151 09:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:32:06.151 09:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:65:00.0 ']' 00:32:06.151 09:49:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=9d0000a3-7612-4337-9815-55a3cda85e90 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 9d0000a3-7612-4337-9815-55a3cda85e90 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=9d0000a3-7612-4337-9815-55a3cda85e90 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:32:07.093 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:07.354 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:07.355 { 00:32:07.355 "uuid": "9d0000a3-7612-4337-9815-55a3cda85e90", 00:32:07.355 "name": "lvs_0", 00:32:07.355 "base_bdev": "Nvme0n1", 00:32:07.355 "total_data_clusters": 457407, 00:32:07.355 "free_clusters": 457407, 00:32:07.355 "block_size": 512, 00:32:07.355 "cluster_size": 4194304 00:32:07.355 } 00:32:07.355 ]' 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="9d0000a3-7612-4337-9815-55a3cda85e90") .free_clusters' 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=457407 00:32:07.355 09:49:42 
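The get_lvs_free_mb helper traced here reduces to one line of arithmetic once jq has pulled free_clusters (above) and cluster_size (below) out of the bdev_lvol_get_lvstores JSON; perf.sh then caps the result before carving the lvol:

fc=457407                         # free_clusters for lvs_0 in this run
cs=4194304                        # cluster_size in bytes (4 MiB)
free_mb=$((fc * cs / 1048576))    # 457407 * 4 = 1829628 MiB free
(( free_mb > 20480 )) && free_mb=20480   # perf.sh caps the test lvol at 20 GiB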
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="9d0000a3-7612-4337-9815-55a3cda85e90") .cluster_size' 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=1829628 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 1829628 00:32:07.355 1829628 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 1829628 -gt 20480 ']' 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:32:07.355 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9d0000a3-7612-4337-9815-55a3cda85e90 lbd_0 20480 00:32:07.615 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=48904698-1f5a-4ae7-8834-f63ef933fb88 00:32:07.615 09:49:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 48904698-1f5a-4ae7-8834-f63ef933fb88 lvs_n_0 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=637a50b0-cfbb-4572-b699-b496cce58056 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 637a50b0-cfbb-4572-b699-b496cce58056 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=637a50b0-cfbb-4572-b699-b496cce58056 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:09.524 { 00:32:09.524 "uuid": "9d0000a3-7612-4337-9815-55a3cda85e90", 00:32:09.524 "name": "lvs_0", 00:32:09.524 "base_bdev": "Nvme0n1", 00:32:09.524 "total_data_clusters": 457407, 00:32:09.524 "free_clusters": 452287, 00:32:09.524 "block_size": 512, 00:32:09.524 "cluster_size": 4194304 00:32:09.524 }, 00:32:09.524 { 00:32:09.524 "uuid": "637a50b0-cfbb-4572-b699-b496cce58056", 00:32:09.524 "name": "lvs_n_0", 00:32:09.524 "base_bdev": "48904698-1f5a-4ae7-8834-f63ef933fb88", 00:32:09.524 "total_data_clusters": 5114, 00:32:09.524 "free_clusters": 5114, 00:32:09.524 "block_size": 512, 00:32:09.524 "cluster_size": 4194304 00:32:09.524 } 00:32:09.524 ]' 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="637a50b0-cfbb-4572-b699-b496cce58056") .free_clusters' 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="637a50b0-cfbb-4572-b699-b496cce58056") .cluster_size' 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1378 -- # echo 20456 00:32:09.524 20456 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:32:09.524 09:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 637a50b0-cfbb-4572-b699-b496cce58056 lbd_nest_0 20456 00:32:09.784 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=e6341fcc-269c-47fc-b54c-f02bf25f93d1 00:32:09.784 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:10.044 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:32:10.044 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 e6341fcc-269c-47fc-b54c-f02bf25f93d1 00:32:10.044 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.304 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:32:10.304 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:32:10.304 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:10.304 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:10.304 09:49:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:22.523 Initializing NVMe Controllers 00:32:22.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:22.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:22.523 Initialization complete. Launching workers. 00:32:22.523 ======================================================== 00:32:22.523 Latency(us) 00:32:22.523 Device Information : IOPS MiB/s Average min max 00:32:22.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.88 0.02 22314.86 241.09 49619.04 00:32:22.523 ======================================================== 00:32:22.523 Total : 44.88 0.02 22314.86 241.09 49619.04 00:32:22.523 00:32:22.523 09:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:22.523 09:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:32.519 Initializing NVMe Controllers 00:32:32.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:32.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:32.519 Initialization complete. Launching workers. 
00:32:32.519 ======================================================== 00:32:32.519 Latency(us) 00:32:32.519 Device Information : IOPS MiB/s Average min max 00:32:32.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.50 7.81 16008.53 5015.76 54871.80 00:32:32.519 ======================================================== 00:32:32.519 Total : 62.50 7.81 16008.53 5015.76 54871.80 00:32:32.519 00:32:32.519 09:50:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:32.519 09:50:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:32.519 09:50:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:42.518 Initializing NVMe Controllers 00:32:42.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:42.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:42.518 Initialization complete. Launching workers. 00:32:42.518 ======================================================== 00:32:42.518 Latency(us) 00:32:42.518 Device Information : IOPS MiB/s Average min max 00:32:42.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9400.80 4.59 3403.78 277.72 10113.37 00:32:42.518 ======================================================== 00:32:42.518 Total : 9400.80 4.59 3403.78 277.72 10113.37 00:32:42.518 00:32:42.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:42.518 09:50:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:52.521 Initializing NVMe Controllers 00:32:52.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:52.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:52.521 Initialization complete. Launching workers. 00:32:52.521 ======================================================== 00:32:52.521 Latency(us) 00:32:52.521 Device Information : IOPS MiB/s Average min max 00:32:52.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4018.30 502.29 7965.39 579.57 22266.81 00:32:52.521 ======================================================== 00:32:52.521 Total : 4018.30 502.29 7965.39 579.57 22266.81 00:32:52.521 00:32:52.521 09:50:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:32:52.521 09:50:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:32:52.521 09:50:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:02.519 Initializing NVMe Controllers 00:33:02.519 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:02.519 Controller IO queue size 128, less than required. 00:33:02.519 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:33:02.519 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:02.519 Initialization complete. Launching workers. 00:33:02.519 ======================================================== 00:33:02.519 Latency(us) 00:33:02.519 Device Information : IOPS MiB/s Average min max 00:33:02.519 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15806.50 7.72 8102.61 1331.27 50460.63 00:33:02.519 ======================================================== 00:33:02.519 Total : 15806.50 7.72 8102.61 1331.27 50460.63 00:33:02.519 00:33:02.519 09:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:33:02.519 09:50:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:12.532 Initializing NVMe Controllers 00:33:12.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:12.532 Controller IO queue size 128, less than required. 00:33:12.532 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:12.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:12.532 Initialization complete. Launching workers. 00:33:12.532 ======================================================== 00:33:12.532 Latency(us) 00:33:12.532 Device Information : IOPS MiB/s Average min max 00:33:12.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1195.04 149.38 107341.51 32288.92 219532.17 00:33:12.532 ======================================================== 00:33:12.532 Total : 1195.04 149.38 107341.51 32288.92 219532.17 00:33:12.532 00:33:12.532 09:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:12.532 09:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e6341fcc-269c-47fc-b54c-f02bf25f93d1 00:33:13.916 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:14.177 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 48904698-1f5a-4ae7-8834-f63ef933fb88 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.437 rmmod nvme_tcp 
00:33:14.437 rmmod nvme_fabrics 00:33:14.437 rmmod nvme_keyring 00:33:14.437 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2960854 ']' 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2960854 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2960854 ']' 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2960854 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2960854 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2960854' 00:33:14.698 killing process with pid 2960854 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 2960854 00:33:14.698 09:50:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2960854 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:16.608 09:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.155 09:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:19.155 00:33:19.155 real 1m32.476s 00:33:19.155 user 5m26.346s 00:33:19.155 sys 0m15.873s 00:33:19.155 09:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.155 09:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:33:19.155 ************************************ 00:33:19.155 END TEST nvmf_perf 00:33:19.155 ************************************ 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.155 ************************************ 00:33:19.155 START TEST nvmf_fio_host 00:33:19.155 ************************************ 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:33:19.155 * Looking for test storage... 00:33:19.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:33:19.155 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:19.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.156 --rc genhtml_branch_coverage=1 00:33:19.156 --rc genhtml_function_coverage=1 00:33:19.156 --rc genhtml_legend=1 00:33:19.156 --rc geninfo_all_blocks=1 00:33:19.156 --rc geninfo_unexecuted_blocks=1 00:33:19.156 00:33:19.156 ' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:19.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.156 --rc genhtml_branch_coverage=1 00:33:19.156 --rc genhtml_function_coverage=1 00:33:19.156 --rc genhtml_legend=1 00:33:19.156 --rc geninfo_all_blocks=1 00:33:19.156 --rc geninfo_unexecuted_blocks=1 00:33:19.156 00:33:19.156 ' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:19.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.156 --rc genhtml_branch_coverage=1 00:33:19.156 --rc genhtml_function_coverage=1 00:33:19.156 --rc genhtml_legend=1 00:33:19.156 --rc geninfo_all_blocks=1 00:33:19.156 --rc geninfo_unexecuted_blocks=1 00:33:19.156 00:33:19.156 ' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:19.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.156 --rc genhtml_branch_coverage=1 00:33:19.156 --rc genhtml_function_coverage=1 00:33:19.156 --rc genhtml_legend=1 00:33:19.156 --rc geninfo_all_blocks=1 00:33:19.156 --rc geninfo_unexecuted_blocks=1 00:33:19.156 00:33:19.156 ' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.156 09:50:54 
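The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2 so the matching coverage flags get exported. Field-wise it is just a numeric compare of dot-separated components; a simplified standalone equivalent (not the exact scripts/common.sh implementation):

lt() {  # lt A B -> exit 0 when version A sorts before version B
  local -a a b; local i
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2: use the branch/function coverage opts"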
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.156 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:19.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:19.157 
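Note: the "[: : integer expression expected" message traced above comes from bash's test builtin at nvmf/common.sh line 33, where an empty value is compared with -eq; the test exits with status 2, which the script treats as false and continues past, so the warning is harmless here. A minimal reproduction (hypothetical variable name, for illustration only):

    # test(1) requires integers on both sides of -eq; an empty string makes the
    # comparison fail with "[: : integer expression expected" (exit status 2),
    # which && then treats as false.
    SOME_FLAG=""                                   # hypothetical stand-in
    [ "$SOME_FLAG" -eq 1 ] && echo "flag set"      # -> [: : integer expression expected
    # A guarded form that stays quiet when the variable is empty:
    [ -n "$SOME_FLAG" ] && [ "$SOME_FLAG" -eq 1 ] && echo "flag set"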
09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:33:19.157 09:50:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:25.754 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:25.754 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:25.754 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:25.754 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
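Note: the device scan traced above resolves each supported NIC PCI function to its kernel net device through sysfs, then takes the first port (cvl_0_0) as the target interface and the second (cvl_0_1) as the initiator. A minimal sketch of the same lookup, assuming the two E810 addresses from this run:

    # Map PCI functions to net devices via /sys, as gather_supported_nvmf_pci_devs
    # does above; functions with no driver-bound net device are skipped.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue
            echo "Found net devices under $pci: ${path##*/}"
        done
    done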
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.754 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:26.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:26.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.696 ms 00:33:26.015 00:33:26.015 --- 10.0.0.2 ping statistics --- 00:33:26.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.015 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:26.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:26.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:33:26.015 00:33:26.015 --- 10.0.0.1 ping statistics --- 00:33:26.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.015 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:26.015 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2980693 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2980693 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2980693 ']' 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:26.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.277 09:51:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.277 [2024-12-09 09:51:01.562288] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
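Note: nvmf_tcp_init, traced above, builds the test dataplane by moving the target port into a network namespace, addressing both ends, opening TCP port 4420, and verifying reachability in both directions with ping before the target starts. A rough equivalent of those steps, assuming the cvl_0_* interface names from this machine (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host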
00:33:26.277 [2024-12-09 09:51:01.562363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.277 [2024-12-09 09:51:01.664200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:26.277 [2024-12-09 09:51:01.692955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.277 [2024-12-09 09:51:01.693014] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.277 [2024-12-09 09:51:01.693024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.277 [2024-12-09 09:51:01.693032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.277 [2024-12-09 09:51:01.693038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.277 [2024-12-09 09:51:01.694990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.277 [2024-12-09 09:51:01.695121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:26.277 [2024-12-09 09:51:01.695291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.277 [2024-12-09 09:51:01.695291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:27.218 [2024-12-09 09:51:02.524567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:27.218 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:33:27.477 Malloc1 00:33:27.477 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:27.735 09:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:27.735 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:27.994 [2024-12-09 09:51:03.306324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:27.994 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
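Note: with nvmf_tgt running inside the namespace (core mask 0xF, four reactors, as logged above), host/fio.sh provisions the storage path entirely over the RPC socket: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and listeners on 10.0.0.2:4420. The same sequence, condensed ($rpc stands in for the full scripts/rpc.py path shown in the trace):

    rpc=/path/to/spdk/scripts/rpc.py              # shortened; full path appears above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420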
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:28.254 09:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:28.515 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:28.515 fio-3.35 00:33:28.515 Starting 1 thread 00:33:31.057 00:33:31.057 test: (groupid=0, jobs=1): 
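Note: fio drives the target through SPDK's external ioengine: the fio_plugin (build/fio/spdk_nvme) is LD_PRELOADed and the --filename argument encodes the transport, address, and namespace instead of naming a block device. The job summary that follows reports roughly 11.3k read IOPS at a 4 KiB block size, i.e. about 11.3k x 4 KiB, or roughly 44 MiB/s, which matches the reported read bandwidth. A minimal sketch of the invocation, with the long workspace paths shortened:

    # Run fio against the NVMe-oF target through the SPDK plugin, as fio_nvme does.
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /path/to/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096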
err= 0: pid=2981368: Mon Dec 9 09:51:06 2024 00:33:31.057 read: IOPS=11.3k, BW=44.3MiB/s (46.5MB/s)(88.8MiB/2004msec) 00:33:31.057 slat (usec): min=2, max=308, avg= 2.26, stdev= 2.94 00:33:31.057 clat (usec): min=3130, max=10230, avg=6212.32, stdev=1229.03 00:33:31.057 lat (usec): min=3132, max=10232, avg=6214.58, stdev=1229.13 00:33:31.057 clat percentiles (usec): 00:33:31.057 | 1.00th=[ 4293], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5014], 00:33:31.057 | 30.00th=[ 5145], 40.00th=[ 5342], 50.00th=[ 5932], 60.00th=[ 6915], 00:33:31.057 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7832], 95.00th=[ 8029], 00:33:31.057 | 99.00th=[ 8455], 99.50th=[ 8586], 99.90th=[ 9372], 99.95th=[ 9503], 00:33:31.057 | 99.99th=[10028] 00:33:31.057 bw ( KiB/s): min=37000, max=55720, per=99.84%, avg=45308.00, stdev=8855.02, samples=4 00:33:31.057 iops : min= 9250, max=13930, avg=11327.00, stdev=2213.75, samples=4 00:33:31.057 write: IOPS=11.3k, BW=44.0MiB/s (46.2MB/s)(88.2MiB/2004msec); 0 zone resets 00:33:31.057 slat (usec): min=2, max=325, avg= 2.33, stdev= 2.36 00:33:31.057 clat (usec): min=2583, max=8344, avg=5022.21, stdev=996.46 00:33:31.057 lat (usec): min=2586, max=8346, avg=5024.54, stdev=996.62 00:33:31.057 clat percentiles (usec): 00:33:31.057 | 1.00th=[ 3490], 5.00th=[ 3785], 10.00th=[ 3884], 20.00th=[ 4047], 00:33:31.057 | 30.00th=[ 4178], 40.00th=[ 4293], 50.00th=[ 4752], 60.00th=[ 5604], 00:33:31.057 | 70.00th=[ 5800], 80.00th=[ 6063], 90.00th=[ 6325], 95.00th=[ 6456], 00:33:31.057 | 99.00th=[ 6849], 99.50th=[ 7111], 99.90th=[ 7898], 99.95th=[ 8029], 00:33:31.057 | 99.99th=[ 8160] 00:33:31.057 bw ( KiB/s): min=37896, max=55168, per=100.00%, avg=45080.00, stdev=8516.01, samples=4 00:33:31.057 iops : min= 9474, max=13792, avg=11270.00, stdev=2129.00, samples=4 00:33:31.057 lat (msec) : 4=8.64%, 10=91.35%, 20=0.01% 00:33:31.057 cpu : usr=76.14%, sys=22.92%, ctx=28, majf=0, minf=18 00:33:31.057 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:33:31.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:31.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:31.057 issued rwts: total=22736,22585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:31.057 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:31.057 00:33:31.057 Run status group 0 (all jobs): 00:33:31.057 READ: bw=44.3MiB/s (46.5MB/s), 44.3MiB/s-44.3MiB/s (46.5MB/s-46.5MB/s), io=88.8MiB (93.1MB), run=2004-2004msec 00:33:31.057 WRITE: bw=44.0MiB/s (46.2MB/s), 44.0MiB/s-44.0MiB/s (46.2MB/s-46.2MB/s), io=88.2MiB (92.5MB), run=2004-2004msec 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local 
sanitizers 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:31.057 09:51:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:33:31.317 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:33:31.317 fio-3.35 00:33:31.317 Starting 1 thread 00:33:33.970 00:33:33.970 test: (groupid=0, jobs=1): err= 0: pid=2982074: Mon Dec 9 09:51:09 2024 00:33:33.970 read: IOPS=9413, BW=147MiB/s (154MB/s)(295MiB/2003msec) 00:33:33.970 slat (usec): min=3, max=116, avg= 3.62, stdev= 1.68 00:33:33.970 clat (usec): min=1675, max=49965, avg=8391.20, stdev=3079.40 00:33:33.970 lat (usec): min=1679, max=49968, avg=8394.82, stdev=3079.46 00:33:33.970 clat percentiles (usec): 00:33:33.970 | 1.00th=[ 3982], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6390], 00:33:33.970 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8094], 60.00th=[ 8717], 00:33:33.970 | 70.00th=[ 9503], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:33:33.970 | 99.00th=[12649], 99.50th=[13304], 99.90th=[46924], 99.95th=[47449], 00:33:33.970 | 99.99th=[49546] 00:33:33.970 bw ( KiB/s): min=65504, max=87392, per=49.05%, avg=73880.00, stdev=9739.94, samples=4 00:33:33.970 iops : min= 4094, max= 5462, avg=4617.50, stdev=608.75, samples=4 00:33:33.970 write: IOPS=5816, BW=90.9MiB/s (95.3MB/s)(150MiB/1654msec); 0 zone resets 00:33:33.970 slat (usec): 
min=39, max=332, avg=40.85, stdev= 6.59 00:33:33.970 clat (usec): min=2261, max=51671, avg=9277.36, stdev=3364.94 00:33:33.970 lat (usec): min=2301, max=51711, avg=9318.21, stdev=3365.41 00:33:33.970 clat percentiles (usec): 00:33:33.970 | 1.00th=[ 6456], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7963], 00:33:33.970 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:33:33.970 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:33:33.970 | 99.00th=[13042], 99.50th=[48497], 99.90th=[50594], 99.95th=[50594], 00:33:33.970 | 99.99th=[51643] 00:33:33.970 bw ( KiB/s): min=67584, max=90912, per=82.70%, avg=76960.00, stdev=10388.62, samples=4 00:33:33.970 iops : min= 4224, max= 5682, avg=4810.00, stdev=649.29, samples=4 00:33:33.970 lat (msec) : 2=0.01%, 4=0.79%, 10=75.15%, 20=23.60%, 50=0.40% 00:33:33.970 lat (msec) : 100=0.05% 00:33:33.970 cpu : usr=84.67%, sys=13.98%, ctx=14, majf=0, minf=48 00:33:33.970 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:33:33.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:33.970 issued rwts: total=18855,9620,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.970 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:33.970 00:33:33.970 Run status group 0 (all jobs): 00:33:33.970 READ: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=295MiB (309MB), run=2003-2003msec 00:33:33.970 WRITE: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=150MiB (158MB), run=1654-1654msec 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:33.970 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:33.971 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:33.971 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:33.971 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:33:33.971 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 -i 10.0.0.2 00:33:34.541 Nvme0n1 00:33:34.541 09:51:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # 
ls_guid=eb00ec4e-afa7-4be9-a4dd-72bcbea881f6 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb eb00ec4e-afa7-4be9-a4dd-72bcbea881f6 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=eb00ec4e-afa7-4be9-a4dd-72bcbea881f6 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:35.112 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:35.373 { 00:33:35.373 "uuid": "eb00ec4e-afa7-4be9-a4dd-72bcbea881f6", 00:33:35.373 "name": "lvs_0", 00:33:35.373 "base_bdev": "Nvme0n1", 00:33:35.373 "total_data_clusters": 1787, 00:33:35.373 "free_clusters": 1787, 00:33:35.373 "block_size": 512, 00:33:35.373 "cluster_size": 1073741824 00:33:35.373 } 00:33:35.373 ]' 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="eb00ec4e-afa7-4be9-a4dd-72bcbea881f6") .free_clusters' 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1787 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="eb00ec4e-afa7-4be9-a4dd-72bcbea881f6") .cluster_size' 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1829888 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1829888 00:33:35.373 1829888 00:33:35.373 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1829888 00:33:35.633 85a16158-8cfd-4244-b953-bb553b40aeb6 00:33:35.633 09:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:33:35.894 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:33:35.894 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:36.154 09:51:11 
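Note: get_lvs_free_mb, traced above, converts the lvstore's free cluster count into mebibytes: free_mb = free_clusters x cluster_size / 1 MiB. With 1787 free clusters of 1 GiB (1073741824 bytes) each, that is 1787 x 1024 = 1829888 MiB, which then sizes the lbd_0 volume created next. The same arithmetic in shell:

    # free_mb = free_clusters * (cluster_size / 1 MiB), as computed above.
    fc=1787; cs=1073741824
    echo $(( fc * (cs / 1048576) ))   # -> 1829888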
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:36.154 09:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:36.414 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:36.414 fio-3.35 00:33:36.414 Starting 1 thread 00:33:38.958 00:33:38.958 test: (groupid=0, jobs=1): err= 0: pid=2983727: Mon Dec 9 09:51:14 2024 00:33:38.958 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.2MiB/2005msec) 00:33:38.958 slat (usec): min=2, max=109, avg= 2.21, stdev= 1.05 00:33:38.958 clat (usec): min=1925, max=11283, avg=6809.86, stdev=504.28 00:33:38.958 lat (usec): min=1943, max=11285, avg=6812.07, stdev=504.22 00:33:38.958 clat percentiles (usec): 00:33:38.958 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6390], 00:33:38.958 | 30.00th=[ 6587], 40.00th=[ 6718], 50.00th=[ 6783], 60.00th=[ 6915], 00:33:38.958 | 70.00th=[ 7046], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7570], 00:33:38.958 | 99.00th=[ 7963], 99.50th=[ 8094], 99.90th=[ 8717], 
99.95th=[10159], 00:33:38.958 | 99.99th=[11207] 00:33:38.958 bw ( KiB/s): min=40328, max=42080, per=99.88%, avg=41446.00, stdev=778.93, samples=4 00:33:38.958 iops : min=10082, max=10520, avg=10361.50, stdev=194.73, samples=4 00:33:38.958 write: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(81.3MiB/2005msec); 0 zone resets 00:33:38.958 slat (nsec): min=2083, max=139950, avg=2289.29, stdev=994.09 00:33:38.958 clat (usec): min=1068, max=9542, avg=5444.72, stdev=431.76 00:33:38.958 lat (usec): min=1080, max=9544, avg=5447.01, stdev=431.74 00:33:38.958 clat percentiles (usec): 00:33:38.958 | 1.00th=[ 4424], 5.00th=[ 4752], 10.00th=[ 4948], 20.00th=[ 5080], 00:33:38.958 | 30.00th=[ 5211], 40.00th=[ 5342], 50.00th=[ 5473], 60.00th=[ 5538], 00:33:38.958 | 70.00th=[ 5669], 80.00th=[ 5800], 90.00th=[ 5932], 95.00th=[ 6128], 00:33:38.958 | 99.00th=[ 6390], 99.50th=[ 6521], 99.90th=[ 7898], 99.95th=[ 8586], 00:33:38.958 | 99.99th=[ 9503] 00:33:38.958 bw ( KiB/s): min=40912, max=41928, per=100.00%, avg=41524.00, stdev=438.30, samples=4 00:33:38.958 iops : min=10228, max=10482, avg=10381.00, stdev=109.57, samples=4 00:33:38.958 lat (msec) : 2=0.02%, 4=0.12%, 10=99.83%, 20=0.03% 00:33:38.958 cpu : usr=71.31%, sys=27.64%, ctx=29, majf=0, minf=27 00:33:38.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:38.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:38.959 issued rwts: total=20799,20813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.959 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:38.959 00:33:38.959 Run status group 0 (all jobs): 00:33:38.959 READ: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.2MiB (85.2MB), run=2005-2005msec 00:33:38.959 WRITE: bw=40.5MiB/s (42.5MB/s), 40.5MiB/s-40.5MiB/s (42.5MB/s-42.5MB/s), io=81.3MiB (85.2MB), run=2005-2005msec 00:33:38.959 09:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:38.959 09:51:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=14dca20e-3a7b-4542-9142-275debf05ea5 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 14dca20e-3a7b-4542-9142-275debf05ea5 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=14dca20e-3a7b-4542-9142-275debf05ea5 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:33:39.900 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:33:40.159 { 00:33:40.159 "uuid": "eb00ec4e-afa7-4be9-a4dd-72bcbea881f6", 00:33:40.159 "name": "lvs_0", 00:33:40.159 "base_bdev": "Nvme0n1", 00:33:40.159 "total_data_clusters": 1787, 
00:33:40.159 "free_clusters": 0, 00:33:40.159 "block_size": 512, 00:33:40.159 "cluster_size": 1073741824 00:33:40.159 }, 00:33:40.159 { 00:33:40.159 "uuid": "14dca20e-3a7b-4542-9142-275debf05ea5", 00:33:40.159 "name": "lvs_n_0", 00:33:40.159 "base_bdev": "85a16158-8cfd-4244-b953-bb553b40aeb6", 00:33:40.159 "total_data_clusters": 457025, 00:33:40.159 "free_clusters": 457025, 00:33:40.159 "block_size": 512, 00:33:40.159 "cluster_size": 4194304 00:33:40.159 } 00:33:40.159 ]' 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="14dca20e-3a7b-4542-9142-275debf05ea5") .free_clusters' 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=457025 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="14dca20e-3a7b-4542-9142-275debf05ea5") .cluster_size' 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=1828100 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 1828100 00:33:40.159 1828100 00:33:40.159 09:51:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1828100 00:33:41.100 6f6e4297-6328-4bfb-b801-83f24623f18f 00:33:41.100 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:33:41.100 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:41.361 09:51:16 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:33:41.361 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:33:41.640 09:51:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:33:41.907 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:33:41.907 fio-3.35 00:33:41.907 Starting 1 thread 00:33:44.463 00:33:44.463 test: (groupid=0, jobs=1): err= 0: pid=2984908: Mon Dec 9 09:51:19 2024 00:33:44.463 read: IOPS=9009, BW=35.2MiB/s (36.9MB/s)(72.0MiB/2047msec) 00:33:44.463 slat (usec): min=2, max=113, avg= 2.24, stdev= 1.18 00:33:44.463 clat (usec): min=2791, max=54103, avg=7863.47, stdev=2817.37 00:33:44.463 lat (usec): min=2808, max=54106, avg=7865.71, stdev=2817.35 00:33:44.463 clat percentiles (usec): 00:33:44.463 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:33:44.463 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:33:44.463 | 70.00th=[ 7963], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:33:44.463 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[52167], 99.95th=[53216], 00:33:44.463 | 99.99th=[54264] 00:33:44.463 bw ( KiB/s): min=35560, max=37352, per=100.00%, avg=36732.00, stdev=799.29, samples=4 00:33:44.463 iops : min= 8890, max= 9338, avg=9183.00, stdev=199.82, samples=4 00:33:44.463 write: IOPS=9014, BW=35.2MiB/s (36.9MB/s)(72.1MiB/2047msec); 0 zone resets 00:33:44.463 slat (nsec): min=2101, max=108542, avg=2310.35, stdev=840.36 00:33:44.463 clat (usec): min=1062, max=52210, avg=6240.56, stdev=2324.87 00:33:44.463 lat (usec): min=1070, max=52212, avg=6242.87, stdev=2324.87 00:33:44.463 clat percentiles (usec): 00:33:44.463 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:33:44.463 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:33:44.463 | 
70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:33:44.463 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[50594], 99.95th=[51119], 00:33:44.463 | 99.99th=[52167] 00:33:44.463 bw ( KiB/s): min=36368, max=37072, per=100.00%, avg=36804.00, stdev=332.91, samples=4 00:33:44.463 iops : min= 9092, max= 9268, avg=9201.00, stdev=83.23, samples=4 00:33:44.463 lat (msec) : 2=0.01%, 4=0.10%, 10=99.55%, 50=0.16%, 100=0.18% 00:33:44.463 cpu : usr=71.55%, sys=27.47%, ctx=48, majf=0, minf=27 00:33:44.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:33:44.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:44.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:44.463 issued rwts: total=18443,18452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:44.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:44.463 00:33:44.463 Run status group 0 (all jobs): 00:33:44.463 READ: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=72.0MiB (75.5MB), run=2047-2047msec 00:33:44.463 WRITE: bw=35.2MiB/s (36.9MB/s), 35.2MiB/s-35.2MiB/s (36.9MB/s-36.9MB/s), io=72.1MiB (75.6MB), run=2047-2047msec 00:33:44.463 09:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:33:44.463 09:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:33:44.463 09:51:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:33:46.373 09:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:33:46.373 09:51:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:33:46.940 09:51:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:33:47.200 09:51:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:49.743 rmmod nvme_tcp 00:33:49.743 rmmod nvme_fabrics 00:33:49.743 rmmod nvme_keyring 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set 
-e 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2980693 ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2980693 ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2980693' 00:33:49.743 killing process with pid 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2980693 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.743 09:51:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.656 00:33:51.656 real 0m32.848s 00:33:51.656 user 2m43.645s 00:33:51.656 sys 0m9.680s 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.656 ************************************ 00:33:51.656 END TEST nvmf_fio_host 00:33:51.656 ************************************ 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.656 ************************************ 00:33:51.656 START TEST nvmf_failover 00:33:51.656 ************************************ 00:33:51.656 09:51:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:33:51.656 * Looking for test storage... 00:33:51.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:51.656 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:51.656 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:33:51.656 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.917 --rc genhtml_branch_coverage=1 00:33:51.917 --rc genhtml_function_coverage=1 00:33:51.917 --rc genhtml_legend=1 00:33:51.917 --rc geninfo_all_blocks=1 00:33:51.917 --rc geninfo_unexecuted_blocks=1 00:33:51.917 00:33:51.917 ' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.917 --rc genhtml_branch_coverage=1 00:33:51.917 --rc genhtml_function_coverage=1 00:33:51.917 --rc genhtml_legend=1 00:33:51.917 --rc geninfo_all_blocks=1 00:33:51.917 --rc geninfo_unexecuted_blocks=1 00:33:51.917 00:33:51.917 ' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.917 --rc genhtml_branch_coverage=1 00:33:51.917 --rc genhtml_function_coverage=1 00:33:51.917 --rc genhtml_legend=1 00:33:51.917 --rc geninfo_all_blocks=1 00:33:51.917 --rc geninfo_unexecuted_blocks=1 00:33:51.917 00:33:51.917 ' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:51.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.917 --rc genhtml_branch_coverage=1 00:33:51.917 --rc genhtml_function_coverage=1 00:33:51.917 --rc genhtml_legend=1 00:33:51.917 --rc geninfo_all_blocks=1 00:33:51.917 --rc geninfo_unexecuted_blocks=1 00:33:51.917 00:33:51.917 ' 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.917 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:51.918 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
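The "[: : integer expression expected" complaint logged above comes from the traced test '[' '' -eq 1 ']': bash's -eq requires integers on both sides, and whatever nvmf/common.sh line 33 expands there is empty in this run. A minimal reproduction plus the usual guard; the flag name below is hypothetical, standing in for the variable the script actually tests:

    # Reproduces the logged error: an empty expansion is not an integer,
    # so test prints the complaint to stderr and exits with status 2.
    flag=""
    [ "$flag" -eq 1 ] && echo "flag set"

    # Common guard: default the expansion to 0 so -eq always sees a number.
    [ "${flag:-0}" -eq 1 ] && echo "flag set" || echo "flag unset"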
00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.918 09:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:00.064 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:00.064 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:00.064 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:00.064 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
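With the two ice ports found above, nvmf_tcp_init splits target and initiator across network namespaces: cvl_0_0 becomes the 10.0.0.2 target inside a private namespace, while cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator. A condensed sketch of the plumbing the following records trace, using the interface names from this run:

    # Target port: hide it in its own namespace with the target address.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Initiator port: keep it in the root namespace with the host address.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

    # Let NVMe/TCP traffic in, then prove the path in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1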
00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:00.064 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:00.064 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:34:00.064 00:34:00.064 --- 10.0.0.2 ping statistics --- 00:34:00.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.064 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:00.064 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:00.064 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:34:00.064 00:34:00.064 --- 10.0.0.1 ping statistics --- 00:34:00.064 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:00.064 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2990483 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2990483 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2990483 ']' 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.064 09:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.064 [2024-12-09 09:51:34.623522] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:34:00.064 [2024-12-09 09:51:34.623590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.064 [2024-12-09 09:51:34.724515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:00.064 [2024-12-09 09:51:34.752037] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:34:00.064 [2024-12-09 09:51:34.752088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:00.064 [2024-12-09 09:51:34.752097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:00.064 [2024-12-09 09:51:34.752104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:00.064 [2024-12-09 09:51:34.752110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:00.064 [2024-12-09 09:51:34.754044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:00.064 [2024-12-09 09:51:34.754209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:00.064 [2024-12-09 09:51:34.754210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:00.064 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:00.065 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:00.324 [2024-12-09 09:51:35.619055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.324 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:34:00.584 Malloc0 00:34:00.584 09:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:00.844 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:00.844 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.104 [2024-12-09 09:51:36.373153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.104 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:01.104 [2024-12-09 09:51:36.545619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:01.365 [2024-12-09 09:51:36.714138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2990946 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2990946 /var/tmp/bdevperf.sock 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2990946 ']' 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:01.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.365 09:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:02.306 09:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.306 09:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:02.306 09:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:02.567 NVMe0n1 00:34:02.567 09:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:02.827 00:34:03.089 09:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2991269 00:34:03.089 09:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:03.089 09:51:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:34:04.031 09:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:04.031 [2024-12-09 09:51:39.453816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd5fc0 is same with the state(6) to be set
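Condensing the failover wiring traced above: bdevperf attaches the same controller name over two listeners, with -x failover marking the second trid as an alternate path rather than a duplicate attach; removing the 4420 listener then forces the flip, and the repeated tcp.c:1790 recv-state error is the target setting the dying qpair's receive state during that teardown. The sequence reduced to its RPCs (rpc.py as invoked in this run):

    # Primary path plus standby path to the same subsystem, same bdev name.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

    # Drop the listener carrying I/O; NVMe0 should ride through on 4421.
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420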
00:34:04.292 09:51:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:34:07.592 09:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:07.592 00:34:07.592 09:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:07.592 [2024-12-09 09:51:43.033858] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set
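The same pattern then rotates one port further (attach 4422, drop 4421). A hypothetical spot-check between rotations, not part of this run's trace, assuming the bdev_nvme_get_controllers RPC available in this SPDK tree; it lists NVMe0's paths so the currently active trid (4421 vs 4422) can be confirmed:

    # Hypothetical: inspect NVMe0 on the bdevperf RPC socket and check
    # which trsvcid the controller is currently connected through.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0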
[2024-12-09 09:51:43.034374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 
09:51:43.034477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.594 [2024-12-09 09:51:43.034490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd6c90 is same with the state(6) to be set 00:34:07.855 09:51:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:34:11.158 09:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.158 [2024-12-09 09:51:46.225960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.158 09:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:34:12.100 09:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:12.100 [2024-12-09 09:51:47.418091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418182] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 00:34:12.101 [2024-12-09 09:51:47.418187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd79d0 is same with the state(6) to be set 
00:34:12.102 09:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2991269
00:34:18.710 {
00:34:18.710   "results": [
00:34:18.710     {
00:34:18.711       "job": "NVMe0n1",
00:34:18.711       "core_mask": "0x1",
00:34:18.711       "workload": "verify",
00:34:18.711       "status": "finished",
00:34:18.711       "verify_range": {
00:34:18.711         "start": 0,
00:34:18.711         "length": 16384
00:34:18.711       },
00:34:18.711       "queue_depth": 128,
00:34:18.711       "io_size": 4096,
00:34:18.711       "runtime": 15.003837,
00:34:18.711       "iops": 12456.346999770793,
00:34:18.711       "mibps": 48.65760546785466,
00:34:18.711       "io_failed": 10845,
00:34:18.711       "io_timeout": 0,
00:34:18.711       "avg_latency_us": 9691.559488751109,
00:34:18.711       "min_latency_us": 373.76,
00:34:18.711       "max_latency_us": 23265.28
00:34:18.711     }
00:34:18.711   ],
00:34:18.711   "core_count": 1
00:34:18.711 }
00:34:18.711 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2990946 ']'
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990946'
killing process with pid 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2990946
09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-12-09 09:51:36.793206] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
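The summary block above reports both raw IOPS and throughput for the verify job; as a quick cross-check (not part of the captured log), the two figures are consistent with the reported 4096-byte io_size:

  # mibps = iops * io_size / 2^20, values copied from the JSON above
  echo 'scale=11; 12456.346999770793 * 4096 / 1048576' | bc
  # -> 48.65760546785..., matching "mibps": 48.65760546785466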
00:34:18.711 [2024-12-09 09:51:36.793262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2990946 ]
00:34:18.711 [2024-12-09 09:51:36.881611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:18.711 [2024-12-09 09:51:36.899518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 15 seconds...
00:34:18.711 11226.00 IOPS, 43.85 MiB/s [2024-12-09T08:51:54.164Z] [2024-12-09 09:51:39.454620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:18.711 [2024-12-09 09:51:39.454659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ command / ABORTED - SQ DELETION completion pairs of the same form repeat for lba 96304 through 96920 (len:8 each, 09:51:39.454675 through 09:51:39.456008); near-duplicate log lines collapsed ...]
00:34:18.713 [2024-12-09 09:51:39.456017]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.713 [2024-12-09 09:51:39.456025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.713 [2024-12-09 09:51:39.456034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456189] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.714 [2024-12-09 09:51:39.456246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.714 [2024-12-09 09:51:39.456528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97120 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.714 [2024-12-09 09:51:39.456535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.715 [2024-12-09 09:51:39.456552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 [2024-12-09 09:51:39.456692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.715 [2024-12-09 09:51:39.456701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.715 
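For reference, the "(00/08) ... p:0 m:0 dnr:0" fields printed by spdk_nvme_print_completion() above map onto the NVMe completion status word: "(00/08)" is (sct/sc), status code type 0x00 (generic) with status code 0x08 (ABORTED - SQ DELETION). A minimal sketch of that layout, mirroring struct spdk_nvme_status from SPDK's spdk/nvme_spec.h (the struct name nvme_status here is local to the sketch):

/* Sketch of the NVMe completion status word as printed in this log.
 * Field layout mirrors struct spdk_nvme_status in spdk/nvme_spec.h. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
	uint16_t p   : 1; /* phase tag, printed as p:0              */
	uint16_t sc  : 8; /* status code, 0x08 = SQ deletion abort  */
	uint16_t sct : 3; /* status code type, 0x00 = generic       */
	uint16_t crd : 2; /* command retry delay (NVMe 1.4+)        */
	uint16_t m   : 1; /* more info available in log page, m:0   */
	uint16_t dnr : 1; /* do not retry, dnr:0                    */
};

int main(void)
{
	/* The exact status every aborted command above completed with. */
	struct nvme_status s = { .sct = 0x00, .sc = 0x08 };
	/* dnr:0 means the host is allowed to retry these commands. */
	printf("sct:%02x sc:%02x p:%u m:%u dnr:%u\n",
	       s.sct, s.sc, s.p, s.m, s.dnr);
	return 0;
}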
00:34:18.715 [2024-12-09 09:51:39.456847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:18.715 [2024-12-09 09:51:39.456854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:18.715 [2024-12-09 09:51:39.456861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97312 len:8 PRP1 0x0 PRP2 0x0
00:34:18.715 [2024-12-09 09:51:39.456869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:18.715 [2024-12-09 09:51:39.456908] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:34:18.715 [2024-12-09 09:51:39.456929-09:51:39.456984] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [4 pairs condensed] ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:18.715 [2024-12-09 09:51:39.456992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:34:18.715 [2024-12-09 09:51:39.460586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:34:18.715 [2024-12-09 09:51:39.460611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6287a0 (9): Bad file descriptor
00:34:18.715 [2024-12-09 09:51:39.484139] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
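The failover and reset above are handled by the bdev_nvme layer; a raw SPDK NVMe application would instead see each of these aborts in the completion callback it registered via spdk_nvme_ns_cmd_read()/spdk_nvme_ns_cmd_write(). A minimal sketch, assuming the public driver API from spdk/nvme.h; requeue_io() and complete_io() are hypothetical application helpers, not SPDK functions:

/* Minimal sketch (assumption: public SPDK NVMe driver API, spdk/nvme.h).
 * Recognizes the ABORTED - SQ DELETION completions seen in this log and
 * retries instead of failing the I/O; dnr:0 in the log means a retry is
 * permitted. requeue_io()/complete_io() are hypothetical helpers. */
#include "spdk/nvme.h"

static void requeue_io(void *io_ctx) { (void)io_ctx; /* resubmit on the new qpair */ }
static void complete_io(void *io_ctx, const struct spdk_nvme_cpl *cpl)
{ (void)io_ctx; (void)cpl; /* hand the result back to the upper layer */ }

/* Matches the spdk_nvme_cmd_cb signature expected by spdk_nvme_ns_cmd_*(). */
static void
io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
	    !cpl->status.dnr) {
		/* The qpair was deleted under us (reset/failover); the command
		 * never reached the namespace, so it is safe to resubmit. */
		requeue_io(io_ctx);
		return;
	}
	complete_io(io_ctx, cpl);
}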
00:34:18.715 11192.50 IOPS, 43.72 MiB/s [2024-12-09T08:51:54.168Z] 11469.67 IOPS, 44.80 MiB/s [2024-12-09T08:51:54.168Z] 11822.00 IOPS, 46.18 MiB/s [2024-12-09T08:51:54.168Z]
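The MiB/s figures above are consistent with the len:8 I/O size seen throughout this log, assuming 512-byte logical blocks (4096 B per I/O; the block size is an inference, not stated in the log). A quick standalone check:

/* Consistency check for the IOPS/MiB/s pairs above.
 * Assumption: 512-byte logical blocks, so len:8 => 4096-byte I/O. */
#include <stdio.h>

int main(void)
{
	const double iops[] = { 11192.50, 11469.67, 11822.00 };
	const double io_bytes = 8 * 512.0; /* len:8 blocks of 512 B */

	for (int i = 0; i < 3; i++) {
		/* e.g. 11192.50 * 4096 / 2^20 = 43.72 MiB/s, matching the log */
		printf("%8.2f IOPS -> %.2f MiB/s\n",
		       iops[i], iops[i] * io_bytes / (1024.0 * 1024.0));
	}
	return 0;
}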
00:34:18.715-00:34:18.720 [2024-12-09 09:51:43.035148-09:51:43.036593] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [second round of queued-I/O aborts; 120 near-identical command/completion pairs condensed] READ sqid:1 lba:59184-59888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 lba:59896-60136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:18.720 [2024-12-09 09:51:43.036599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:18.720 [2024-12-09 09:51:43.036604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:60168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:60176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:60192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.720 [2024-12-09 09:51:43.036675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.720 [2024-12-09 09:51:43.036695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.720 [2024-12-09 09:51:43.036701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60200 len:8 PRP1 0x0 PRP2 0x0 00:34:18.720 [2024-12-09 09:51:43.036707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036739] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:34:18.720 [2024-12-09 09:51:43.036755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.720 [2024-12-09 09:51:43.036761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.720 [2024-12-09 09:51:43.036772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.720 [2024-12-09 09:51:43.036783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.720 [2024-12-09 09:51:43.036794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:43.036799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:34:18.720 [2024-12-09 09:51:43.039249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:34:18.720 [2024-12-09 09:51:43.039269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6287a0 (9): Bad file descriptor 00:34:18.720 [2024-12-09 09:51:43.105872] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:34:18.720 11816.20 IOPS, 46.16 MiB/s [2024-12-09T08:51:54.173Z] 11987.17 IOPS, 46.82 MiB/s [2024-12-09T08:51:54.173Z] 12124.57 IOPS, 47.36 MiB/s [2024-12-09T08:51:54.173Z] 12250.88 IOPS, 47.85 MiB/s [2024-12-09T08:51:54.173Z] 12348.78 IOPS, 48.24 MiB/s [2024-12-09T08:51:54.173Z] [2024-12-09 09:51:47.419836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.419991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.720 [2024-12-09 09:51:47.419996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.720 [2024-12-09 09:51:47.420003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 
09:51:47.420055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420175] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.721 [2024-12-09 09:51:47.420367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.721 [2024-12-09 09:51:47.420374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 
[2024-12-09 09:51:47.420545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:13224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:18.722 [2024-12-09 09:51:47.420662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420668] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.722 [2024-12-09 09:51:47.420673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.722 [2024-12-09 09:51:47.420680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:101 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13408 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.420994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.420999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.421005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.421010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.421017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 
09:51:47.421022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.421029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.421034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.421040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.421045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.723 [2024-12-09 09:51:47.421051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.723 [2024-12-09 09:51:47.421056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:18.724 [2024-12-09 09:51:47.421125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13568 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421162] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13576 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13584 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13592 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13608 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 [2024-12-09 09:51:47.421248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [2024-12-09 09:51:47.421254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:18.724 [2024-12-09 09:51:47.421258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:18.724 [2024-12-09 09:51:47.421262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13616 len:8 PRP1 0x0 PRP2 0x0 00:34:18.724 
[2024-12-09 09:51:47.421267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.724 [... 15 repeated abort cycles elided: nvme_qpair_abort_queued_reqs (*ERROR*: aborting queued i/o) and nvme_qpair_manual_complete_request completed each queued WRITE sqid:1 cid:0 nsid:1 (len:8, lba:13624 through lba:13736) with ABORTED - SQ DELETION (00/08) ...] 00:34:18.725 [2024-12-09 09:51:47.435234] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:34:18.725 [2024-12-09 09:51:47.435263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.725 [2024-12-09 09:51:47.435272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.725 [2024-12-09 09:51:47.435281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.725 [2024-12-09 09:51:47.435288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.725 [2024-12-09 09:51:47.435300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.725 [2024-12-09 09:51:47.435307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.725 [2024-12-09 09:51:47.435314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.725 [2024-12-09 09:51:47.435321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.725 [2024-12-09 09:51:47.435328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
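The trace above is the bdev_nvme failover path doing its job: when the connection to 10.0.0.2:4422 drops, every queued WRITE is completed manually with ABORTED - SQ DELETION, the admin queue's ASYNC EVENT REQUESTs are aborted, and the controller moves on to the next registered path. A minimal sketch of the multipath setup this exercises, using the same rpc.py calls that appear later in this log (socket path, ports, and NQN taken from this run; the full path to scripts/rpc.py is shortened here):

# First attach creates bdev NVMe0 with the failover multipath policy (-x failover);
# repeating the call with the same -b name registers alternate paths that
# bdev_nvme rotates through when the active connection drops.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover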
00:34:18.725 [2024-12-09 09:51:47.435368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6287a0 (9): Bad file descriptor 00:34:18.725 [2024-12-09 09:51:47.438663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:34:18.725 [2024-12-09 09:51:47.549205] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:34:18.725 12221.80 IOPS, 47.74 MiB/s [2024-12-09T08:51:54.178Z] 12262.36 IOPS, 47.90 MiB/s [2024-12-09T08:51:54.178Z] 12316.50 IOPS, 48.11 MiB/s [2024-12-09T08:51:54.178Z] 12376.62 IOPS, 48.35 MiB/s [2024-12-09T08:51:54.178Z] 12424.86 IOPS, 48.53 MiB/s 00:34:18.725 Latency(us) 00:34:18.725 [2024-12-09T08:51:54.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.725 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:18.725 Verification LBA range: start 0x0 length 0x4000 00:34:18.725 NVMe0n1 : 15.00 12456.35 48.66 722.82 0.00 9691.56 373.76 23265.28 00:34:18.725 [2024-12-09T08:51:54.178Z] =================================================================================================================== 00:34:18.725 [2024-12-09T08:51:54.178Z] Total : 12456.35 48.66 722.82 0.00 9691.56 373.76 23265.28 00:34:18.725 Received shutdown signal, test time was about 15.000000 seconds 00:34:18.725 00:34:18.725 Latency(us) 00:34:18.725 [2024-12-09T08:51:54.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:18.725 [2024-12-09T08:51:54.178Z] =================================================================================================================== 00:34:18.725 [2024-12-09T08:51:54.178Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2993994 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2993994 /var/tmp/bdevperf.sock 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2993994 ']' 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:18.725 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.726 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:18.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:34:18.726 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.726 09:51:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:19.299 09:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:19.299 09:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:34:19.299 09:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:19.299 [2024-12-09 09:51:54.624365] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:19.299 09:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:34:19.560 [2024-12-09 09:51:54.804787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:34:19.560 09:51:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:19.826 NVMe0n1 00:34:19.826 09:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:20.119 00:34:20.119 09:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:34:20.415 00:34:20.415 09:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:20.415 09:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:34:20.675 09:51:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:20.935 09:51:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:34:24.235 09:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:24.235 09:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:34:24.235 09:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:24.235 09:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2995190 00:34:24.235 09:51:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2995190 00:34:25.176 { 00:34:25.176 "results": [ 00:34:25.176 { 00:34:25.176 "job": "NVMe0n1", 00:34:25.176 "core_mask": "0x1", 
00:34:25.176 "workload": "verify", 00:34:25.176 "status": "finished", 00:34:25.176 "verify_range": { 00:34:25.176 "start": 0, 00:34:25.176 "length": 16384 00:34:25.176 }, 00:34:25.176 "queue_depth": 128, 00:34:25.176 "io_size": 4096, 00:34:25.176 "runtime": 1.011225, 00:34:25.176 "iops": 12726.148977725037, 00:34:25.176 "mibps": 49.71151944423843, 00:34:25.176 "io_failed": 0, 00:34:25.176 "io_timeout": 0, 00:34:25.177 "avg_latency_us": 10023.511806667184, 00:34:25.177 "min_latency_us": 1638.4, 00:34:25.177 "max_latency_us": 13271.04 00:34:25.177 } 00:34:25.177 ], 00:34:25.177 "core_count": 1 00:34:25.177 } 00:34:25.177 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:25.177 [2024-12-09 09:51:53.678481] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:34:25.177 [2024-12-09 09:51:53.678539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2993994 ] 00:34:25.177 [2024-12-09 09:51:53.761354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:25.177 [2024-12-09 09:51:53.775937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:25.177 [2024-12-09 09:51:56.148420] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:34:25.177 [2024-12-09 09:51:56.148458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.177 [2024-12-09 09:51:56.148466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.177 [2024-12-09 09:51:56.148473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.177 [2024-12-09 09:51:56.148479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.177 [2024-12-09 09:51:56.148485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.177 [2024-12-09 09:51:56.148490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.177 [2024-12-09 09:51:56.148495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:25.177 [2024-12-09 09:51:56.148500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:25.177 [2024-12-09 09:51:56.148506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:34:25.177 [2024-12-09 09:51:56.148526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:34:25.177 [2024-12-09 09:51:56.148537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98f7a0 (9): Bad file descriptor 00:34:25.177 [2024-12-09 09:51:56.282844] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:34:25.177 Running I/O for 1 seconds... 00:34:25.177 12711.00 IOPS, 49.65 MiB/s 00:34:25.177 Latency(us) 00:34:25.177 [2024-12-09T08:52:00.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:25.177 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:25.177 Verification LBA range: start 0x0 length 0x4000 00:34:25.177 NVMe0n1 : 1.01 12726.15 49.71 0.00 0.00 10023.51 1638.40 13271.04 00:34:25.177 [2024-12-09T08:52:00.630Z] =================================================================================================================== 00:34:25.177 [2024-12-09T08:52:00.630Z] Total : 12726.15 49.71 0.00 0.00 10023.51 1638.40 13271.04 00:34:25.177 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:25.177 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:34:25.437 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:25.437 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:25.437 09:52:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:34:25.698 09:52:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:25.958 09:52:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2993994 ']' 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2993994' 00:34:29.256 killing process with pid 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2993994 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:34:29.256 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.516 rmmod nvme_tcp 00:34:29.516 rmmod nvme_fabrics 00:34:29.516 rmmod nvme_keyring 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2990483 ']' 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2990483 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2990483 ']' 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2990483 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990483 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:29.516 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:29.517 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990483' 00:34:29.517 killing process with pid 2990483 00:34:29.517 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2990483 00:34:29.517 09:52:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2990483 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.777 09:52:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.689 00:34:31.689 real 0m40.100s 00:34:31.689 user 2m3.792s 00:34:31.689 sys 0m8.566s 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:34:31.689 ************************************ 00:34:31.689 END TEST nvmf_failover 00:34:31.689 ************************************ 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.689 09:52:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.951 ************************************ 00:34:31.951 START TEST nvmf_host_discovery 00:34:31.951 ************************************ 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:34:31.951 * Looking for test storage... 
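Each host-side suite is driven through the autotest run_test wrapper, which prints the START/END banners seen above, times the script, and propagates its exit status; the invocation pattern, copied from the trace, is simply:

# run_test <name> <script> [args...] -- banners and timing come from autotest_common.sh
run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp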
00:34:31.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.951 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.952 --rc genhtml_branch_coverage=1 00:34:31.952 --rc genhtml_function_coverage=1 00:34:31.952 --rc genhtml_legend=1 00:34:31.952 --rc geninfo_all_blocks=1 00:34:31.952 --rc geninfo_unexecuted_blocks=1 00:34:31.952 00:34:31.952 ' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.952 --rc genhtml_branch_coverage=1 00:34:31.952 --rc genhtml_function_coverage=1 00:34:31.952 --rc genhtml_legend=1 00:34:31.952 --rc geninfo_all_blocks=1 00:34:31.952 --rc geninfo_unexecuted_blocks=1 00:34:31.952 00:34:31.952 ' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.952 --rc genhtml_branch_coverage=1 00:34:31.952 --rc genhtml_function_coverage=1 00:34:31.952 --rc genhtml_legend=1 00:34:31.952 --rc geninfo_all_blocks=1 00:34:31.952 --rc geninfo_unexecuted_blocks=1 00:34:31.952 00:34:31.952 ' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:31.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.952 --rc genhtml_branch_coverage=1 00:34:31.952 --rc genhtml_function_coverage=1 00:34:31.952 --rc genhtml_legend=1 00:34:31.952 --rc geninfo_all_blocks=1 00:34:31.952 --rc geninfo_unexecuted_blocks=1 00:34:31.952 00:34:31.952 ' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:34:31.952 09:52:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:34:31.952 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:32.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:34:32.215 09:52:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:40.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:40.368 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:40.368 09:52:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:40.368 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:40.368 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:40.368 
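With the two ice ports mapped to cvl_0_0 and cvl_0_1, nvmf_tcp_init splits them across a network namespace so target and initiator get distinct network stacks on one host; condensed from the commands traced just below:

ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up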
09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:40.368 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:40.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:40.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:34:40.369 00:34:40.369 --- 10.0.0.2 ping statistics --- 00:34:40.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.369 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:40.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:40.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:34:40.369 00:34:40.369 --- 10.0.0.1 ping statistics --- 00:34:40.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:40.369 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3000331 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3000331 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3000331 ']' 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:40.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.369 09:52:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.369 [2024-12-09 09:52:14.987487] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:34:40.369 [2024-12-09 09:52:14.987562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:40.369 [2024-12-09 09:52:15.084782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.369 [2024-12-09 09:52:15.110744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:40.369 [2024-12-09 09:52:15.110793] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:40.369 [2024-12-09 09:52:15.110801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:40.369 [2024-12-09 09:52:15.110809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:40.369 [2024-12-09 09:52:15.110815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:40.369 [2024-12-09 09:52:15.111515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:40.369 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.369 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:40.369 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:40.369 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.369 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 [2024-12-09 09:52:15.847099] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 [2024-12-09 09:52:15.859354] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 null0 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 null1 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3000650 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3000650 /tmp/host.sock 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3000650 ']' 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:40.631 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:40.631 09:52:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:40.631 [2024-12-09 09:52:15.965700] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:34:40.631 [2024-12-09 09:52:15.965769] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3000650 ] 00:34:40.631 [2024-12-09 09:52:16.058280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:40.893 [2024-12-09 09:52:16.086939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:41.466 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:41.727 09:52:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.727 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.728 [2024-12-09 09:52:17.158663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:41.728 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:34:41.989 09:52:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.989 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:34:41.990 09:52:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:42.610 [2024-12-09 09:52:17.865397] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:42.610 [2024-12-09 09:52:17.865417] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:42.610 [2024-12-09 09:52:17.865430] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:42.610 
[2024-12-09 09:52:17.954721] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:42.869 [2024-12-09 09:52:18.139964] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:42.869 [2024-12-09 09:52:18.141039] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1651f60:1 started. 00:34:42.869 [2024-12-09 09:52:18.142652] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:42.869 [2024-12-09 09:52:18.142671] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:42.869 [2024-12-09 09:52:18.185385] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1651f60 was disconnected and freed. delete nvme_qpair. 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.129 09:52:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.129 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:43.130 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:43.391 [2024-12-09 09:52:18.604675] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16522e0:1 started. 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.391 [2024-12-09 09:52:18.615916] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16522e0 was disconnected and freed. delete nvme_qpair. 
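Steps @50 through @111 in the trace interleave target-side provisioning with host-side polling. A condensed sketch of both halves under the same assumptions as before (the script actually starts discovery first and lets the subsystem appear afterwards; the simpler provision-then-discover order is shown here, with the values the trace converges on in the comments):

    # Target side: export the null bdevs through a subsystem listening on 4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Host side: enable verbose bdev_nvme logging, start discovery, poll the results
    scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs  # "nvme0"
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs             # "nvme0n1", then "nvme0n1 nvme0n2" once null1 is added
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'              # one notification per attached namespace

Nothing is created on the host explicitly: bdev_nvme_start_discovery attaches a controller (nvme0) and one bdev per namespace as the discovery log page reports them, which is exactly what the empty-then-populated get_subsystem_names / get_bdev_list checks above are waiting for.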
00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 [2024-12-09 09:52:18.706530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:43.391 [2024-12-09 09:52:18.706799] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:43.391 [2024-12-09 09:52:18.706818] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:43.391 [2024-12-09 09:52:18.795068] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 
nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:43.391 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:43.392 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.652 [2024-12-09 09:52:18.860682] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:34:43.652 [2024-12-09 09:52:18.860718] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:43.652 [2024-12-09 09:52:18.860726] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:43.652 [2024-12-09 09:52:18.860731] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:43.652 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:34:43.652 09:52:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 
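Step @118 adds a second listener, which reaches the host as an AER on the discovery controller, a fresh log page read, and a new path (the "ctrlr was created to 10.0.0.2:4421" INFO line above). The script's get_subsystem_paths helper is just a jq projection over the controller's transport IDs; a sketch of the pair:

    # Target: publish a second path to the same subsystem
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    # Host: list the service ports of every path of controller nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs                 # expect: 4420 4421

The "found again" lines for 4420 and 4421 are the discovery poller re-confirming both entries on each log page pass; no bdevs are added or removed, which is why the notification count check that follows expects 0.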
00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.595 [2024-12-09 09:52:19.962845] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:34:44.595 [2024-12-09 09:52:19.962861] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:44.595 [2024-12-09 09:52:19.964411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:44.595 [2024-12-09 09:52:19.964424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.595 [2024-12-09 09:52:19.964432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:44.595 [2024-12-09 09:52:19.964437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.595 [2024-12-09 09:52:19.964443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:44.595 [2024-12-09 09:52:19.964448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.595 [2024-12-09 09:52:19.964454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:44.595 [2024-12-09 09:52:19.964459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:44.595 [2024-12-09 09:52:19.964464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.595 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.595 [2024-12-09 09:52:19.974428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.595 [2024-12-09 09:52:19.984462] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.595 [2024-12-09 09:52:19.984472] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.595 [2024-12-09 09:52:19.984477] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.595 [2024-12-09 09:52:19.984481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.595 [2024-12-09 09:52:19.984494] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.595 [2024-12-09 09:52:19.984917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.595 [2024-12-09 09:52:19.984947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.595 [2024-12-09 09:52:19.984956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.595 [2024-12-09 09:52:19.984972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.595 [2024-12-09 09:52:19.984993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.595 [2024-12-09 09:52:19.984998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:19.985005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.596 [2024-12-09 09:52:19.985010] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.596 [2024-12-09 09:52:19.985014] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:44.596 [2024-12-09 09:52:19.985018] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.596 09:52:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.596 [2024-12-09 09:52:19.994523] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.596 [2024-12-09 09:52:19.994533] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.596 [2024-12-09 09:52:19.994536] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:19.994540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.596 [2024-12-09 09:52:19.994551] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:19.994837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.596 [2024-12-09 09:52:19.994848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.596 [2024-12-09 09:52:19.994854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.596 [2024-12-09 09:52:19.994862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.596 [2024-12-09 09:52:19.994869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.596 [2024-12-09 09:52:19.994878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:19.994883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.596 [2024-12-09 09:52:19.994887] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.596 [2024-12-09 09:52:19.994891] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:44.596 [2024-12-09 09:52:19.994894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.596 [2024-12-09 09:52:20.004581] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.596 [2024-12-09 09:52:20.004590] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.596 [2024-12-09 09:52:20.004593] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.004596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.596 [2024-12-09 09:52:20.004606] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:44.596 [2024-12-09 09:52:20.004950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.596 [2024-12-09 09:52:20.004960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.596 [2024-12-09 09:52:20.004965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.596 [2024-12-09 09:52:20.004973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.596 [2024-12-09 09:52:20.004981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.596 [2024-12-09 09:52:20.004985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:20.004990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.596 [2024-12-09 09:52:20.004995] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.596 [2024-12-09 09:52:20.004999] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:44.596 [2024-12-09 09:52:20.005002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.596 [2024-12-09 09:52:20.014646] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.596 [2024-12-09 09:52:20.014673] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.596 [2024-12-09 09:52:20.014678] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.014684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.596 [2024-12-09 09:52:20.014715] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.015092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.596 [2024-12-09 09:52:20.015104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.596 [2024-12-09 09:52:20.015110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.596 [2024-12-09 09:52:20.015118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.596 [2024-12-09 09:52:20.015129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.596 [2024-12-09 09:52:20.015134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:20.015139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.596 [2024-12-09 09:52:20.015144] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:34:44.596 [2024-12-09 09:52:20.015147] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:44.596 [2024-12-09 09:52:20.015151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:44.596 [2024-12-09 09:52:20.024743] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.596 [2024-12-09 09:52:20.024753] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.596 [2024-12-09 09:52:20.024757] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.024760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.596 [2024-12-09 09:52:20.024770] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.025042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.596 [2024-12-09 09:52:20.025051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.596 [2024-12-09 09:52:20.025056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.596 [2024-12-09 09:52:20.025064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.596 [2024-12-09 09:52:20.025071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.596 [2024-12-09 09:52:20.025076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:20.025081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.596 [2024-12-09 09:52:20.025085] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.596 [2024-12-09 09:52:20.025088] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:34:44.596 [2024-12-09 09:52:20.025092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.596 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.596 [2024-12-09 09:52:20.034799] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.596 [2024-12-09 09:52:20.034810] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:44.596 [2024-12-09 09:52:20.034813] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.034817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.596 [2024-12-09 09:52:20.034828] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.596 [2024-12-09 09:52:20.035120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.596 [2024-12-09 09:52:20.035129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.596 [2024-12-09 09:52:20.035135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.596 [2024-12-09 09:52:20.035143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.596 [2024-12-09 09:52:20.035150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.596 [2024-12-09 09:52:20.035154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.596 [2024-12-09 09:52:20.035160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.597 [2024-12-09 09:52:20.035164] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.597 [2024-12-09 09:52:20.035167] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:44.597 [2024-12-09 09:52:20.035171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:44.859 [2024-12-09 09:52:20.044857] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:44.860 [2024-12-09 09:52:20.044868] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:34:44.860 [2024-12-09 09:52:20.044871] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:44.860 [2024-12-09 09:52:20.044875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:44.860 [2024-12-09 09:52:20.044886] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:44.860 [2024-12-09 09:52:20.045151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.860 [2024-12-09 09:52:20.045160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16240b0 with addr=10.0.0.2, port=4420 00:34:44.860 [2024-12-09 09:52:20.045165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16240b0 is same with the state(6) to be set 00:34:44.860 [2024-12-09 09:52:20.045173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16240b0 (9): Bad file descriptor 00:34:44.860 [2024-12-09 09:52:20.045180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:44.860 [2024-12-09 09:52:20.045185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:44.860 [2024-12-09 09:52:20.045196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:44.860 [2024-12-09 09:52:20.045201] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:44.860 [2024-12-09 09:52:20.045204] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:44.860 [2024-12-09 09:52:20.045207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
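NOTE: each reset cycle above follows the same path: delete the qpairs, disconnect the ctrlr, reconnect to 10.0.0.2:4420, and fail with connect() errno = 111 (ECONNREFUSED), because this phase of the test has removed the 4420 listener and left the subsystem reachable only on 4421. The failures are expected; the host keeps retrying until discovery re-points it at the live port. The refused connect can be reproduced by hand with bash's /dev/tcp redirection (an illustration only, not part of the suite):

    # Exit status 0 only if something accepts the TCP connection; while 4420
    # has no listener this fails with ECONNREFUSED (errno 111), the same error
    # the bdev_nvme reconnect path keeps logging above.
    if (exec 3<> /dev/tcp/10.0.0.2/4420) 2> /dev/null; then
        echo "port 4420 is accepting connections"
    else
        echo "port 4420 refused the connection"
    fi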
00:34:44.860 [2024-12-09 09:52:20.049476] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:34:44.860 [2024-12-09 09:52:20.049489] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.860 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.122 09:52:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.066 [2024-12-09 09:52:21.355073] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:46.066 [2024-12-09 09:52:21.355087] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:46.066 [2024-12-09 09:52:21.355096] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:46.066 [2024-12-09 09:52:21.442339] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:34:46.327 [2024-12-09 09:52:21.712618] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:34:46.327 [2024-12-09 09:52:21.713257] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1650a30:1 started. 
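NOTE: step host/discovery.sh@141 above restarts discovery after the earlier bdev_nvme_stop_discovery, and is what produces the discovery ctrlr attach, "new subsystem nvme0", and "ctrlr was created to 10.0.0.2:4421" lines that follow. Its flags map one-to-one onto the fields of the JSON-RPC request bodies dumped a little further down in this log:

    # Flag -> JSON-RPC field, per the request dumps below:
    #   -b nvme                      "name"            base name; attached ctrlrs become nvme0, ...
    #   -t tcp                       "trtype"          transport type
    #   -a 10.0.0.2                  "traddr"          discovery service address
    #   -s 8009                      "trsvcid"         discovery service port
    #   -f ipv4                      "adrfam"          address family
    #   -q nqn.2021-12.io.spdk:test  "hostnqn"         NQN this host presents
    #   -w                           "wait_for_attach" true: block until attach completes
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w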
00:34:46.328 [2024-12-09 09:52:21.714605] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:46.328 [2024-12-09 09:52:21.714627] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.328 [2024-12-09 09:52:21.723961] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1650a30 was disconnected and freed. delete nvme_qpair. 
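NOTE: NOT (host/discovery.sh@143 above) wraps a command that is expected to fail; starting a second discovery service under the already-registered name "nvme" must be rejected for the test to pass, which is exactly the -17 "File exists" response shown next. From the autotest_common.sh@652-@679 lines traced here, the helper behaves roughly like this simplified sketch (the real implementation also remaps signal-death exit codes above 128 rather than just propagating them):

    NOT() {
        local es=0
        "$@" || es=$?      # run the wrapped command, keep its exit status (@652/@655)
        if (( es > 128 )); then
            return "$es"   # simplified: a signal death is a real error (@663)
        fi
        (( !es == 0 ))     # @679: nonzero es -> exit 0 (expected failure happened),
    }                      #       zero es    -> exit 1 (command wrongly succeeded)

The closing arithmetic test is the whole trick: (( !es == 0 )) succeeds exactly when es is nonzero, inverting the wrapped command's result while still letting crashes surface.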
00:34:46.328 request:
00:34:46.328 {
00:34:46.328 "name": "nvme",
00:34:46.328 "trtype": "tcp",
00:34:46.328 "traddr": "10.0.0.2",
00:34:46.328 "adrfam": "ipv4",
00:34:46.328 "trsvcid": "8009",
00:34:46.328 "hostnqn": "nqn.2021-12.io.spdk:test",
00:34:46.328 "wait_for_attach": true,
00:34:46.328 "method": "bdev_nvme_start_discovery",
00:34:46.328 "req_id": 1
00:34:46.328 }
00:34:46.328 Got JSON-RPC error response
00:34:46.328 response:
00:34:46.328 {
00:34:46.328 "code": -17,
00:34:46.328 "message": "File exists"
00:34:46.328 }
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:34:46.328 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:34:46.589 request:
00:34:46.589 {
00:34:46.589 "name": "nvme_second",
00:34:46.589 "trtype": "tcp",
00:34:46.589 "traddr": "10.0.0.2",
00:34:46.589 "adrfam": "ipv4",
00:34:46.589 "trsvcid": "8009",
00:34:46.589 "hostnqn": "nqn.2021-12.io.spdk:test",
00:34:46.589 "wait_for_attach": true,
00:34:46.589 "method": "bdev_nvme_start_discovery",
00:34:46.589 "req_id": 1
00:34:46.589 }
00:34:46.589 Got JSON-RPC error response
00:34:46.589 response:
00:34:46.589 {
00:34:46.589 "code": -17,
00:34:46.589 "message": "File exists"
00:34:46.589 }
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:34:46.589 09:52:21
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.589 09:52:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:47.534 [2024-12-09 09:52:22.974395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:47.534 [2024-12-09 09:52:22.974418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163a3f0 with addr=10.0.0.2, port=8010 00:34:47.534 [2024-12-09 09:52:22.974429] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:47.534 [2024-12-09 09:52:22.974434] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:47.534 [2024-12-09 09:52:22.974439] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:48.921 [2024-12-09 09:52:23.976697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:48.921 [2024-12-09 09:52:23.976717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163a3f0 with addr=10.0.0.2, port=8010 00:34:48.921 [2024-12-09 09:52:23.976725] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:48.921 [2024-12-09 09:52:23.976730] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:48.921 [2024-12-09 09:52:23.976735] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:34:49.863 [2024-12-09 09:52:24.978736] 
bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:34:49.863 request: 00:34:49.863 { 00:34:49.863 "name": "nvme_second", 00:34:49.863 "trtype": "tcp", 00:34:49.863 "traddr": "10.0.0.2", 00:34:49.863 "adrfam": "ipv4", 00:34:49.863 "trsvcid": "8010", 00:34:49.863 "hostnqn": "nqn.2021-12.io.spdk:test", 00:34:49.863 "wait_for_attach": false, 00:34:49.863 "attach_timeout_ms": 3000, 00:34:49.863 "method": "bdev_nvme_start_discovery", 00:34:49.863 "req_id": 1 00:34:49.863 } 00:34:49.863 Got JSON-RPC error response 00:34:49.863 response: 00:34:49.863 { 00:34:49.863 "code": -110, 00:34:49.863 "message": "Connection timed out" 00:34:49.863 } 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:49.863 09:52:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3000650 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:49.863 rmmod nvme_tcp 00:34:49.863 rmmod nvme_fabrics 00:34:49.863 rmmod nvme_keyring 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:34:49.863 09:52:25 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3000331 ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3000331 ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3000331' 00:34:49.863 killing process with pid 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3000331 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:49.863 09:52:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:52.403 00:34:52.403 real 0m20.170s 00:34:52.403 user 0m23.381s 00:34:52.403 sys 0m7.118s 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:34:52.403 ************************************ 00:34:52.403 END TEST nvmf_host_discovery 00:34:52.403 ************************************ 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.403 ************************************ 00:34:52.403 START TEST nvmf_host_multipath_status 00:34:52.403 ************************************ 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:34:52.403 * Looking for test storage... 00:34:52.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.403 --rc genhtml_branch_coverage=1 00:34:52.403 --rc genhtml_function_coverage=1 00:34:52.403 --rc genhtml_legend=1 00:34:52.403 --rc geninfo_all_blocks=1 00:34:52.403 --rc geninfo_unexecuted_blocks=1 00:34:52.403 00:34:52.403 ' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.403 --rc genhtml_branch_coverage=1 00:34:52.403 --rc genhtml_function_coverage=1 00:34:52.403 --rc genhtml_legend=1 00:34:52.403 --rc geninfo_all_blocks=1 00:34:52.403 --rc geninfo_unexecuted_blocks=1 00:34:52.403 00:34:52.403 ' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.403 --rc genhtml_branch_coverage=1 00:34:52.403 --rc genhtml_function_coverage=1 00:34:52.403 --rc genhtml_legend=1 00:34:52.403 --rc geninfo_all_blocks=1 00:34:52.403 --rc geninfo_unexecuted_blocks=1 00:34:52.403 00:34:52.403 ' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:52.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:52.403 --rc genhtml_branch_coverage=1 00:34:52.403 --rc genhtml_function_coverage=1 00:34:52.403 --rc genhtml_legend=1 00:34:52.403 --rc geninfo_all_blocks=1 00:34:52.403 --rc geninfo_unexecuted_blocks=1 00:34:52.403 00:34:52.403 ' 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
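NOTE: the scripts/common.sh@333-@368 walk above is cmp_versions deciding that the installed lcov (1.15) is older than 2, after which autotest_common.sh@1712 picks the older "--rc lcov_*" option spelling for LCOV_OPTS. The traced steps imply a field-by-field numeric compare along these lines (a sketch; only the '<' path is exercised in this log, and the real helper also handles the remaining comparison operators):

    lt() { cmp_versions "$1" "<" "$2"; }    # scripts/common.sh@373

    cmp_versions() {
        local IFS=.-:    # split version fields on . - : (sh@336/@337)
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v
        # Walk the longer field list; missing fields count as 0, which is how
        # "1.15" (two fields) compares against plain "2" (one field) (sh@364).
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == ">" ]]; return    # sh@367
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == "<" ]]; return    # sh@368
            fi
        done
        return 1    # versions equal: strict < and > are both false
    }

Here ver1=(1 15) and ver2=(2); the very first fields give 1 < 2, so lt 1.15 2 returns 0.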
00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:52.403 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:52.404 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:34:52.404 09:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:35:00.542 09:52:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:00.542 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
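NOTE: the nvmf/common.sh@315-@377 lines above are gather_supported_nvmf_pci_devs bucketing PCI device IDs; 0x8086:0x159b is an Intel E810-family port (kernel driver "ice"), matched here for both 0000:4b:00.0 and 0000:4b:00.1. The @410-@429 steps just below then resolve each matched function to its kernel network interface through sysfs, which amounts to this sketch of the traced expansions:

    for pci in "${pci_devs[@]}"; do
        # A network-capable PCI function lists its interface name(s) under
        # /sys/bus/pci/devices/<BDF>/net/, e.g. .../0000:4b:00.0/net/cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")          # nvmf/common.sh@429
    done

The two interfaces found this way, cvl_0_0 and cvl_0_1, become the target and initiator ends of the TCP link that the netns commands below wire up (cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1).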
00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:00.542 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:00.542 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:35:00.542 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:00.542 09:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:00.542 09:52:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:00.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:00.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:35:00.542 00:35:00.542 --- 10.0.0.2 ping statistics --- 00:35:00.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.542 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:00.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:00.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:35:00.542 00:35:00.542 --- 10.0.0.1 ping statistics --- 00:35:00.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:00.542 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3006541 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3006541 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3006541 ']' 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:00.542 09:52:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:00.542 09:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.542 [2024-12-09 09:52:35.253437] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:00.542 [2024-12-09 09:52:35.253513] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:00.542 [2024-12-09 09:52:35.352007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:00.542 [2024-12-09 09:52:35.379224] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:00.542 [2024-12-09 09:52:35.379276] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:00.542 [2024-12-09 09:52:35.379285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:00.542 [2024-12-09 09:52:35.379292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:00.542 [2024-12-09 09:52:35.379298] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:00.542 [2024-12-09 09:52:35.381050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.542 [2024-12-09 09:52:35.381053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3006541 00:35:00.802 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:01.079 [2024-12-09 09:52:36.273094] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:01.079 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:35:01.079 Malloc0 00:35:01.079 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:35:01.339 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:01.599 09:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:01.599 [2024-12-09 09:52:37.029596] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.599 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:01.860 [2024-12-09 09:52:37.214088] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3006957 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3006957 /var/tmp/bdevperf.sock 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3006957 ']' 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:35:01.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
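At this point the trace has finished the target-side bring-up: nvmf_tgt runs inside the cvl_0_0_ns_spdk namespace created earlier, and the test gets its two I/O paths by adding two TCP listeners (ports 4420 and 4421) on the same subsystem. Condensed into plain RPC calls, the bring-up looks like the sketch below; the paths, NQN, addresses, and flags are taken from the log, and only the $rpc shorthand is introduced here.

    # Sketch of the target-side bring-up performed by multipath_status.sh
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags exactly as logged
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2             # -r enables ANA reporting
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Same address, two ports -> two paths to the same namespace:
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

On the host side, bdevperf then attaches the same NQN twice, once per port, with -x multipath, so both connections collapse into a single Nvme0n1 bdev with two I/O paths.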
00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.860 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:02.122 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.122 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:35:02.122 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:35:02.382 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:02.643 Nvme0n1 00:35:02.643 09:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:35:02.903 Nvme0n1 00:35:03.164 09:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:35:03.164 09:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:35:05.077 09:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:35:05.077 09:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:05.336 09:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:05.336 09:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.718 09:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:06.718 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:06.718 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:06.718 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.718 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:06.978 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:06.978 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:06.978 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:06.978 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:07.239 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:07.501 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:07.501 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:35:07.501 09:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
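The state changes driving each check_status round are per-listener ANA updates like the pair this note sits between: the target flips the advertised ANA state, the initiator observes the change, and bdev_nvme re-evaluates which path is "current". As a standalone sketch (NQN and addresses from the log; the shell variables are shorthand introduced here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # Demote one listener, promote the other; under the default
    # active_passive policy the optimized path becomes the current one.
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state $nqn -t tcp -a 10.0.0.2 -s 4421 -n optimized

After the "sleep 1" that follows such a pair, the next check_status round asserts exactly this outcome: 4420 current=false, 4421 current=true.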
00:35:07.762 09:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:08.023 09:52:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:08.963 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:09.225 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.225 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:09.225 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.225 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
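Each port_status probe in these rounds is a two-leg pipeline, the RPC dump plus a jq filter, as in the call that resumes just below. Pulled out of the trace, a single probe reads (sketch; the $rpc/$sock shorthand is introduced here, everything else is verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    # Ask bdevperf for its view of all I/O paths, then extract one field
    # of the path whose listener port matches; prints "true" or "false".
    $rpc -s $sock bdev_nvme_get_io_paths \
      | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

check_status simply compares six such values, current/connected/accessible for each of the two ports, against the expected pattern passed in by the test step.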
00:35:09.486 09:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:09.746 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:09.746 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:09.746 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:09.746 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:10.005 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:10.005 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:35:10.005 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:10.265 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:10.265 09:52:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.649 09:52:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:11.649 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:11.649 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:11.649 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.649 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:11.909 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:11.909 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:11.909 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:11.909 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:12.170 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:12.430 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:12.430 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:35:12.430 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:12.691 09:52:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:12.964 09:52:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:13.905 09:52:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:13.905 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:14.166 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.166 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:14.166 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.166 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:14.427 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.428 09:52:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:14.688 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:14.688 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:14.688 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:14.688 09:52:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:14.951 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:14.951 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:35:14.951 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:14.951 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:15.214 09:52:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:35:16.161 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:35:16.161 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:16.161 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.161 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:16.421 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.422 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:16.422 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.422 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:16.683 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:16.683 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:16.683 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.683 09:52:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:16.683 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.683 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:16.683 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.683 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:16.943 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:16.943 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:16.943 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:16.943 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:17.203 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.203 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:17.203 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:17.204 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:17.204 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:17.204 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:35:17.204 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:35:17.463 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:17.728 09:52:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:18.783 09:52:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:18.783 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:19.043 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.043 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:19.044 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.044 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:19.304 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.304 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:19.304 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.304 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:19.564 09:52:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:19.833 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:19.833 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:35:20.095 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:35:20.095 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:35:20.095 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:20.355 09:52:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:35:21.297 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:35:21.297 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:21.297 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.297 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:21.558 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.558 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:21.558 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.558 09:52:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:21.558 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.558 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:21.819 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:22.080 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.080 09:52:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:22.080 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.080 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:35:22.341 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:22.602 09:52:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:35:22.863 09:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:35:23.807 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:35:23.807 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:35:23.807 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:23.807 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:24.068 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.069 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:24.330 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.330 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:24.330 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.330 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:24.591 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.591 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:24.591 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.591 09:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:35:24.875 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:25.138 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:35:25.138 09:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
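Note the expectation change in the round that follows: the @116 step earlier switched Nvme0n1 to active_active, so every usable path now reports current==true, and even with both listeners set non_optimized the next check_status asserts true for both ports (under active_passive the same ANA states yielded true/false, as in the @102 round above). The policy switch itself is a single RPC against the initiator-side bdevperf socket, roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Spread I/O across all active paths instead of pinning to one path;
    # issued once against the bdevperf RPC socket, as seen at @116.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active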
00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.526 09:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:26.787 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:26.787 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:26.787 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:26.787 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:27.048 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.048 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:27.048 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.048 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:35:27.309 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:35:27.569 09:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:35:27.830 09:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:35:28.771 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:35:28.771 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:35:28.771 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:28.771 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.032 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:35:29.293 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:35:29.293 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:35:29.293 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.293 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:35:29.554 09:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3006957 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3006957 ']' 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3006957 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006957 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006957' 00:35:29.813 killing process with pid 3006957 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3006957 00:35:29.813 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3006957 00:35:29.813 { 00:35:29.813 "results": [ 00:35:29.813 { 00:35:29.813 "job": "Nvme0n1", 
00:35:29.813 "core_mask": "0x4", 00:35:29.813 "workload": "verify", 00:35:29.813 "status": "terminated", 00:35:29.813 "verify_range": { 00:35:29.813 "start": 0, 00:35:29.813 "length": 16384 00:35:29.813 }, 00:35:29.813 "queue_depth": 128, 00:35:29.813 "io_size": 4096, 00:35:29.813 "runtime": 26.732299, 00:35:29.813 "iops": 12118.112250652292, 00:35:29.813 "mibps": 47.336375979110514, 00:35:29.813 "io_failed": 0, 00:35:29.813 "io_timeout": 0, 00:35:29.813 "avg_latency_us": 10543.83861161617, 00:35:29.813 "min_latency_us": 198.82666666666665, 00:35:29.813 "max_latency_us": 3019898.88 00:35:29.813 } 00:35:29.813 ], 00:35:29.814 "core_count": 1 00:35:29.814 } 00:35:30.089 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3006957 00:35:30.089 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:30.089 [2024-12-09 09:52:37.274254] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:30.089 [2024-12-09 09:52:37.274333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006957 ] 00:35:30.089 [2024-12-09 09:52:37.339994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.089 [2024-12-09 09:52:37.360472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:30.089 Running I/O for 90 seconds... 00:35:30.089 9665.00 IOPS, 37.75 MiB/s [2024-12-09T08:53:05.543Z] 11313.50 IOPS, 44.19 MiB/s [2024-12-09T08:53:05.543Z] 11885.33 IOPS, 46.43 MiB/s [2024-12-09T08:53:05.543Z] 12190.50 IOPS, 47.62 MiB/s [2024-12-09T08:53:05.543Z] 12347.60 IOPS, 48.23 MiB/s [2024-12-09T08:53:05.543Z] 12427.00 IOPS, 48.54 MiB/s [2024-12-09T08:53:05.543Z] 12506.29 IOPS, 48.85 MiB/s [2024-12-09T08:53:05.543Z] 12561.62 IOPS, 49.07 MiB/s [2024-12-09T08:53:05.543Z] 12604.56 IOPS, 49.24 MiB/s [2024-12-09T08:53:05.543Z] 12645.60 IOPS, 49.40 MiB/s [2024-12-09T08:53:05.543Z] 12660.18 IOPS, 49.45 MiB/s [2024-12-09T08:53:05.543Z] [2024-12-09 09:52:50.352124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.090 [2024-12-09 09:52:50.352159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
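The nvme_qpair.c NOTICE pairs above and throughout the rest of this excerpt come from the bdevperf log (try.txt) that the test cats after tearing the process down: whenever an I/O completes with an error, SPDK prints the submission-queue entry (nvme_io_qpair_print_command) followed by its completion (spdk_nvme_print_completion). The bracketed timestamps are when bdevperf recorded each event during the earlier ANA flips; the leading counters show when the log was replayed. The status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" decodes as status code type 0x3 (Path Related Status) with status code 0x02 (Asymmetric Access Inaccessible), i.e. exactly the ANA state the test had set on one listener, and dnr:0 means the Do Not Retry bit is clear, so the host is free to requeue each I/O on the path that is still accessible, which is why the run still finishes with "io_failed": 0 in the results above. When triaging a flood like this, a couple of hypothetical one-liners (not part of the test) condense it:

    # How many completions failed with the ANA-inaccessible status:
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt
    # First and last LBA touched by the affected commands:
    grep -o 'lba:[0-9]*' try.txt | cut -d: -f2 | sort -n | sed -n '1p;$p'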
00:35:30.090 [2024-12-09 09:52:50.352553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:24984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.352985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.352997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:25128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:30.090 
[2024-12-09 09:52:50.353103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:30.090 [2024-12-09 09:52:50.353136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.090 [2024-12-09 09:52:50.353141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.091 [2024-12-09 09:52:50.353301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:63 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353489] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 
09:52:50.353748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25448 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.353986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.091 [2024-12-09 09:52:50.353991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:30.091 [2024-12-09 09:52:50.354006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.354690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 
dnr:0 00:35:30.092 [2024-12-09 09:52:50.354833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.354978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.354983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.355000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.092 [2024-12-09 09:52:50.355005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:30.092 [2024-12-09 09:52:50.355021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.092 [2024-12-09 09:52:50.355026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
00:35:30.092 [2024-12-09 09:52:50.355071-09:52:50.355184] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 nsid:1 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, lba 25648-25688; every completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (sqhd 0009-000f; repeated command/completion NOTICE pairs condensed, only cid/lba/sqhd vary)
00:35:30.093 12545.00 IOPS, 49.00 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11580.00 IOPS, 45.23 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 10752.86 IOPS, 42.00 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 10143.27 IOPS, 39.62 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 10322.12 IOPS, 40.32 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 10475.29 IOPS, 40.92 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 10830.94 IOPS, 42.31 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11155.74 IOPS, 43.58 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11343.20 IOPS, 44.31 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11413.14 IOPS, 44.58 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11488.36 IOPS, 44.88 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11726.74 IOPS, 45.81 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 11958.12 IOPS, 46.71 MiB/s [2024-12-09T08:53:05.546Z]
00:35:30.093 [2024-12-09 09:53:03.033958-09:53:03.043122] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: interleaved WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) commands, sqid:1 nsid:1 len:8, lba 7056-8312; every completion: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0 (several hundred identical NOTICE pairs condensed; only cid, lba and sqhd vary) 00:35:30.098
qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.043189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.043206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.043222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.043238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.043301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.043316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.043327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.052194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.052232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.052239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.052250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.098 [2024-12-09 09:53:03.052256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.052266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.052271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.098 [2024-12-09 09:53:03.052282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.098 [2024-12-09 09:53:03.052287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052336] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.052644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 
nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.052701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.099 [2024-12-09 09:53:03.052706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:30.099 [2024-12-09 09:53:03.053442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.099 [2024-12-09 09:53:03.053447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.053877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:30.100 
[2024-12-09 09:53:03.053889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.053895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.053910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.053975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.053986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.053991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.054022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.054037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 
cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.054100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.054143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.054148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:8384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.100 [2024-12-09 09:53:03.055506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055521] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:30.100 [2024-12-09 09:53:03.055610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.100 [2024-12-09 09:53:03.055615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:30.101 [2024-12-09 09:53:03.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.055717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.055732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.055763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.055820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.055825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 
nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.101 [2024-12-09 09:53:03.057937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:30.101 [2024-12-09 09:53:03.057994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.101 [2024-12-09 09:53:03.057998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:30.101 
[2024-12-09 09:53:03.058009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.101 [2024-12-09 09:53:03.058014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:35:30.101 [2024-12-09 09:53:03.058024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.101 [2024-12-09 09:53:03.058029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:35:30.101 [2024-12-09 09:53:03.058039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:30.101 [2024-12-09 09:53:03.058045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
[... ~200 further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs elided (2024-12-09 09:53:03.058055 through 09:53:03.066913, elapsed 00:35:30.101-00:35:30.107): READ and WRITE commands on sqid:1 nsid:1 with varying cid/lba, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0015-005e (wrapping at 007f), p:0 m:0 dnr:0 throughout ...]
00:35:30.107 [2024-12-09 09:53:03.066923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.107 [2024-12-09 09:53:03.066929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:35:30.107 [2024-12-09 09:53:03.067912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.107 [2024-12-09 09:53:03.067925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:30.107 [2024-12-09 09:53:03.067937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:30.107 [2024-12-09 09:53:03.067943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:35:30.107
[2024-12-09 09:53:03.067953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.067958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.067968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.067975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.067985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.067990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 
cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.068967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.068984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.068994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.069000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.069010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.069015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.069028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.069033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.069044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.069049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.069059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.107 [2024-12-09 09:53:03.069065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:30.107 [2024-12-09 09:53:03.069075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.107 [2024-12-09 09:53:03.069080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069158] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:30.108 [2024-12-09 09:53:03.069317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.069539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.069565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.069570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.074124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.074626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.074655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.074671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.074687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.074698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.108 [2024-12-09 09:53:03.074704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.075745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.075758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:35:30.108 [2024-12-09 09:53:03.075771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.108 [2024-12-09 09:53:03.075776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:30.109 
[2024-12-09 09:53:03.075863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.075893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.075986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.075996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 
cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:9224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.076174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.076185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.076190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.077051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.077099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.077118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.077165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.109 [2024-12-09 09:53:03.077228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:30.109 [2024-12-09 09:53:03.077239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.109 [2024-12-09 09:53:03.077244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:30.110 [2024-12-09 09:53:03.077339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:9928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:30.110 [2024-12-09 09:53:03.077978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.077988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.077994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.078004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:30.110 [2024-12-09 09:53:03.078009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:30.110 [2024-12-09 09:53:03.078020] nvme_qpair.c: 
00:35:30.110 [2024-12-09 09:53:03.078-086] nvme_qpair.c: *NOTICE*: [several hundred repeated per-command notices collapsed: alternating nvme_io_qpair_print_command READ/WRITE prints (sqid:1 nsid:1, lba ~8312-10936, len:8) and spdk_nvme_print_completion entries, every outstanding I/O on qid:1 completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the active path was inaccessible]
00:35:30.115 12076.68 IOPS, 47.17 MiB/s
[2024-12-09T08:53:05.568Z] 12104.27 IOPS, 47.28 MiB/s
[2024-12-09T08:53:05.568Z] Received shutdown signal, test time was about 26.732908 seconds
00:35:30.115
00:35:30.115 Latency(us)
00:35:30.115 [2024-12-09T08:53:05.568Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:35:30.115 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:35:30.115 Verification LBA range: start 0x0 length 0x4000
00:35:30.115 Nvme0n1 : 26.73 12118.11 47.34 0.00 0.00 10543.84 198.83 3019898.88
00:35:30.115 [2024-12-09T08:53:05.568Z] ===================================================================================================================
00:35:30.115 [2024-12-09T08:53:05.568Z] Total : 12118.11 47.34 0.00 0.00 10543.84 198.83 3019898.88
00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.115 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.376 rmmod nvme_tcp 00:35:30.376 rmmod nvme_fabrics 00:35:30.376 rmmod nvme_keyring 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3006541 ']' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3006541 ']' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006541' 00:35:30.376 killing process with pid 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3006541 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # 
iptables-restore 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.376 09:53:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.922 00:35:32.922 real 0m40.418s 00:35:32.922 user 1m44.065s 00:35:32.922 sys 0m11.414s 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:35:32.922 ************************************ 00:35:32.922 END TEST nvmf_host_multipath_status 00:35:32.922 ************************************ 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:32.922 ************************************ 00:35:32.922 START TEST nvmf_discovery_remove_ifc 00:35:32.922 ************************************ 00:35:32.922 09:53:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:35:32.922 * Looking for test storage... 
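Before the new test begins, note the iptr step in the nvmftestfini teardown just traced: the xtrace prefix splits its stages across wrapped lines, but reassembled it is a single pipeline that strips exactly the SPDK-tagged firewall rules and leaves everything else intact:

    # drop only rules carrying the SPDK_NVMF comment tag, keep the rest
    iptables-save | grep -v SPDK_NVMF | iptables-restore
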
00:35:32.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:32.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.922 --rc genhtml_branch_coverage=1 00:35:32.922 --rc genhtml_function_coverage=1 00:35:32.922 --rc genhtml_legend=1 00:35:32.922 --rc geninfo_all_blocks=1 00:35:32.922 --rc geninfo_unexecuted_blocks=1 00:35:32.922 00:35:32.922 ' 00:35:32.922 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:32.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.922 --rc genhtml_branch_coverage=1 00:35:32.922 --rc genhtml_function_coverage=1 00:35:32.923 --rc genhtml_legend=1 00:35:32.923 --rc geninfo_all_blocks=1 00:35:32.923 --rc geninfo_unexecuted_blocks=1 00:35:32.923 00:35:32.923 ' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.923 --rc genhtml_branch_coverage=1 00:35:32.923 --rc genhtml_function_coverage=1 00:35:32.923 --rc genhtml_legend=1 00:35:32.923 --rc geninfo_all_blocks=1 00:35:32.923 --rc geninfo_unexecuted_blocks=1 00:35:32.923 00:35:32.923 ' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:32.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:32.923 --rc genhtml_branch_coverage=1 00:35:32.923 --rc genhtml_function_coverage=1 00:35:32.923 --rc genhtml_legend=1 00:35:32.923 --rc geninfo_all_blocks=1 00:35:32.923 --rc geninfo_unexecuted_blocks=1 00:35:32.923 00:35:32.923 ' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:32.923 
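The scripts/common.sh trace above walks a generic dotted-version comparison, here deciding whether the installed lcov 1.15 predates 2 so the right coverage flags can be picked. Distilled into a standalone sketch (the function name version_lt is mine; the harness spells it lt/cmp_versions):

    # compare two dotted versions field by field; return 0 if $1 < $2
    version_lt() {
        local IFS='.-:'                  # same separators the trace uses
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < max; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first differing field decides
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1                         # equal versions are not less-than
    }

    version_lt 1.15 2 && echo 'lcov predates 2.x'   # prints: lcov predates 2.x
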
09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:32.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:35:32.923 09:53:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.066 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:35:41.067 09:53:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:41.067 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.067 09:53:15 
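The bus walk above matched both ports of an Intel NIC (vendor 0x8086, device 0x159b, bound to the ice driver) and took the e810 branch rather than any of the Mellanox cases. The equivalent manual check, for reference:

    # list the two matched ports (expected here: 0000:4b:00.0 and 0000:4b:00.1)
    lspci -d 8086:159b
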
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:41.067 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:41.067 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:41.067 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:41.067 
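Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above builds the test topology: cvl_0_0 moves into a network namespace and becomes the target-side interface, while cvl_0_1 stays in the default namespace as the initiator side. Commands as traced:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment tag is what teardown greps out later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
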
09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:41.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:35:41.067 00:35:41.067 --- 10.0.0.2 ping statistics --- 00:35:41.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.067 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:41.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:35:41.067 00:35:41.067 --- 10.0.0.1 ping statistics --- 00:35:41.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.067 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:41.067 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3016732 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3016732 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3016732 ']' 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
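With pings clean in both directions, nvmfappstart launches the target inside the namespace. Assembled from the trace above (waitforlisten is the harness helper that blocks until the process answers on its RPC socket, /var/tmp/spdk.sock here):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # returns once the RPC socket is serving
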
00:35:41.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.068 09:53:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 [2024-12-09 09:53:15.631309] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:41.068 [2024-12-09 09:53:15.631378] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.068 [2024-12-09 09:53:15.729305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.068 [2024-12-09 09:53:15.755212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.068 [2024-12-09 09:53:15.755262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.068 [2024-12-09 09:53:15.755275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.068 [2024-12-09 09:53:15.755283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.068 [2024-12-09 09:53:15.755288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:41.068 [2024-12-09 09:53:15.756001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.068 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.068 [2024-12-09 09:53:16.499453] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:41.068 [2024-12-09 09:53:16.507703] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:35:41.345 null0 00:35:41.345 [2024-12-09 09:53:16.539666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3016803 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3016803 /tmp/host.sock 00:35:41.345 09:53:16 
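The rpc_cmd block at @43 is fed as a batch, so the individual RPCs are not echoed; only the target's notices are visible (TCP transport init, discovery listener on 8009, bdev null0, data listener on 4420). A plausible reconstruction using standard rpc.py calls — the bdev size, block size, and allow-any-host flag are illustrative, not read from the log:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a    # -a: allow any host
    ./scripts/rpc.py bdev_null_create null0 1000 512                        # size/block illustrative
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
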
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3016803 ']' 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:35:41.345 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.345 09:53:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:41.345 [2024-12-09 09:53:16.614832] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:35:41.345 [2024-12-09 09:53:16.614899] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3016803 ] 00:35:41.345 [2024-12-09 09:53:16.706101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.345 [2024-12-09 09:53:16.734474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:35:42.289 09:53:17 
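Spelled out as a direct rpc.py invocation, the discovery attach that @69 drives against the host-side app (parameters exactly as traced):

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

The three timeout options are what the rest of the test exercises: once the interface disappears, reconnects are attempted every second and the controller is abandoned after two.
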
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:42.289 09:53:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.230 [2024-12-09 09:53:18.548187] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:35:43.230 [2024-12-09 09:53:18.548218] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:35:43.230 [2024-12-09 09:53:18.548237] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:35:43.230 [2024-12-09 09:53:18.635498] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:35:43.518 [2024-12-09 09:53:18.859015] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:35:43.518 [2024-12-09 09:53:18.860107] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x17b3b20:1 started. 00:35:43.518 [2024-12-09 09:53:18.861989] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:35:43.518 [2024-12-09 09:53:18.862053] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:35:43.518 [2024-12-09 09:53:18.862079] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:35:43.518 [2024-12-09 09:53:18.862098] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:35:43.518 [2024-12-09 09:53:18.862123] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.518 [2024-12-09 09:53:18.867453] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x17b3b20 was disconnected and freed. delete nvme_qpair. 
00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.518 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:35:43.519 09:53:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:43.779 09:53:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:44.717 09:53:20 
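The @29 traces that repeat from here on are all one helper; distilled into a sketch, with rpc.py standing in for the harness's rpc_cmd wrapper:

    # list bdev names as one sorted, space-joined line ('' when none exist)
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Having confirmed nvme0n1 exists, the test then pulls the interface out from under the live controller (ip addr del 10.0.0.2/24, link down) and waits for the bdev list to drain.
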
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:44.717 09:53:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:46.099 09:53:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:47.055 09:53:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:47.998 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:47.999 09:53:23 
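Each one-second iteration above is a turn of wait_for_bdev; a distilled sketch of that loop, under the names the trace shows:

    # poll until the bdev list matches the expected value ('' or a bdev name)
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }

Here it spins on [[ nvme0n1 != '' ]] until the ctrlr-loss timeout fires and nvme0n1 is removed.
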
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:35:47.999 09:53:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:35:48.942 [2024-12-09 09:53:24.302150] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:35:48.942 [2024-12-09 09:53:24.302190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.942 [2024-12-09 09:53:24.302199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.942 [2024-12-09 09:53:24.302206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.942 [2024-12-09 09:53:24.302211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.942 [2024-12-09 09:53:24.302217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.942 [2024-12-09 09:53:24.302222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.942 [2024-12-09 09:53:24.302228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.942 [2024-12-09 09:53:24.302233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.942 [2024-12-09 09:53:24.302239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:35:48.942 [2024-12-09 09:53:24.302245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:48.942 [2024-12-09 09:53:24.302250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790350 is same with the state(6) to be set 00:35:48.942 [2024-12-09 09:53:24.312172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790350 (9): Bad file descriptor 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:48.942 [2024-12-09 09:53:24.322205] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:35:48.942 [2024-12-09 09:53:24.322215] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:35:48.942 [2024-12-09 09:53:24.322220] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:48.942 [2024-12-09 09:53:24.322224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:35:48.942 [2024-12-09 09:53:24.322241] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:48.942 09:53:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:50.324 [2024-12-09 09:53:25.340733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:35:50.324 [2024-12-09 09:53:25.340826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1790350 with addr=10.0.0.2, port=4420 00:35:50.324 [2024-12-09 09:53:25.340859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1790350 is same with the state(6) to be set 00:35:50.324 [2024-12-09 09:53:25.340918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1790350 (9): Bad file descriptor 00:35:50.324 [2024-12-09 09:53:25.342033] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:35:50.324 [2024-12-09 09:53:25.342103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:35:50.324 [2024-12-09 09:53:25.342127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:35:50.324 [2024-12-09 09:53:25.342152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:35:50.324 [2024-12-09 09:53:25.342173] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:35:50.324 [2024-12-09 09:53:25.342189] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:35:50.324 [2024-12-09 09:53:25.342203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:35:50.324 [2024-12-09 09:53:25.342225] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:35:50.324 [2024-12-09 09:53:25.342240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:35:50.324 09:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:50.324 09:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:35:50.324 09:53:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:35:50.893 [2024-12-09 09:53:26.344661] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:35:50.893 [2024-12-09 09:53:26.344677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:35:50.893 [2024-12-09 09:53:26.344689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:35:50.893 [2024-12-09 09:53:26.344694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:35:50.893 [2024-12-09 09:53:26.344700] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:35:50.893 [2024-12-09 09:53:26.344705] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:35:50.893 [2024-12-09 09:53:26.344709] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:35:50.893 [2024-12-09 09:53:26.344712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:35:50.893 [2024-12-09 09:53:26.344729] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:35:50.893 [2024-12-09 09:53:26.344748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:35:50.893 [2024-12-09 09:53:26.344755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:50.893 [2024-12-09 09:53:26.344762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:35:50.893 [2024-12-09 09:53:26.344768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:50.893 [2024-12-09 09:53:26.344773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.153 [2024-12-09 09:53:26.344779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.153 [2024-12-09 09:53:26.344787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.153 [2024-12-09 09:53:26.344792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.153 [2024-12-09 09:53:26.344799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:35:51.153 [2024-12-09 09:53:26.344806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:35:51.153 [2024-12-09 09:53:26.344814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:35:51.153 [2024-12-09 09:53:26.345085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177faa0 (9): Bad file descriptor
00:35:51.153 [2024-12-09 09:53:26.346094] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:35:51.153 [2024-12-09 09:53:26.346102] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:35:51.153 09:53:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:35:52.537 09:53:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:35:53.109 [2024-12-09 09:53:28.404835] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:35:53.109 [2024-12-09 09:53:28.404850] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:35:53.109 [2024-12-09 09:53:28.404861] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:35:53.109 [2024-12-09 09:53:28.492104] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:35:53.109 [2024-12-09 09:53:28.550748] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:35:53.109 [2024-12-09 09:53:28.551434] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1781de0:1 started.
00:35:53.109 [2024-12-09 09:53:28.552327] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:35:53.109 [2024-12-09 09:53:28.552353] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:35:53.109 [2024-12-09 09:53:28.552367] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:35:53.109 [2024-12-09 09:53:28.552377] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:35:53.109 [2024-12-09 09:53:28.552383] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:35:53.370 [2024-12-09 09:53:28.561260] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1781de0 was disconnected and freed. delete nvme_qpair.
00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3016803 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3016803 ']' 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3016803 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016803 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016803' 00:35:53.370 killing process with pid 3016803 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3016803 00:35:53.370 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3016803 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.631 rmmod nvme_tcp 00:35:53.631 rmmod nvme_fabrics 00:35:53.631 rmmod nvme_keyring 00:35:53.631 09:53:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3016732 ']' 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3016732 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3016732 ']' 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3016732 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3016732 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3016732' 00:35:53.631 killing process with pid 3016732 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3016732 00:35:53.631 09:53:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3016732 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.892 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.893 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.893 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.893 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.893 09:53:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.803 00:35:55.803 real 0m23.230s 00:35:55.803 user 0m27.327s 00:35:55.803 sys 0m6.998s 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:35:55.803 ************************************ 00:35:55.803 END TEST nvmf_discovery_remove_ifc 00:35:55.803 ************************************ 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.803 09:53:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:56.064 ************************************ 00:35:56.064 START TEST nvmf_identify_kernel_target 00:35:56.064 ************************************ 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:35:56.064 * Looking for test storage... 00:35:56.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.064 --rc genhtml_branch_coverage=1 00:35:56.064 --rc genhtml_function_coverage=1 00:35:56.064 --rc genhtml_legend=1 00:35:56.064 --rc geninfo_all_blocks=1 00:35:56.064 --rc geninfo_unexecuted_blocks=1 00:35:56.064 00:35:56.064 ' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.064 --rc genhtml_branch_coverage=1 00:35:56.064 --rc genhtml_function_coverage=1 00:35:56.064 --rc genhtml_legend=1 00:35:56.064 --rc geninfo_all_blocks=1 00:35:56.064 --rc geninfo_unexecuted_blocks=1 00:35:56.064 00:35:56.064 ' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.064 --rc genhtml_branch_coverage=1 00:35:56.064 --rc genhtml_function_coverage=1 00:35:56.064 --rc genhtml_legend=1 00:35:56.064 --rc geninfo_all_blocks=1 00:35:56.064 --rc geninfo_unexecuted_blocks=1 00:35:56.064 00:35:56.064 ' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:56.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.064 --rc genhtml_branch_coverage=1 00:35:56.064 --rc genhtml_function_coverage=1 00:35:56.064 --rc genhtml_legend=1 00:35:56.064 --rc geninfo_all_blocks=1 00:35:56.064 --rc geninfo_unexecuted_blocks=1 00:35:56.064 00:35:56.064 ' 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.064 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:35:56.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:56.065 09:53:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:36:04.205 09:53:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:04.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:04.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:04.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.205 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:04.206 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:04.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:36:04.206 00:36:04.206 --- 10.0.0.2 ping statistics --- 00:36:04.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.206 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:04.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:36:04.206 00:36:04.206 --- 10.0.0.1 ping statistics --- 00:36:04.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.206 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:04.206 09:53:38 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:04.206 09:53:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.751 Waiting for block devices as requested 00:36:07.010 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:07.011 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:07.011 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:07.271 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:07.271 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:07.271 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:07.271 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:07.532 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:07.532 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:07.793 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:07.793 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:07.793 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:08.054 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:08.054 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:08.054 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:08.348 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:08.348 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:08.639 No valid GPT data, bailing 00:36:08.639 09:53:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:08.639 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:08.904 00:36:08.904 Discovery Log Number of Records 2, Generation counter 2 00:36:08.904 =====Discovery Log Entry 0====== 00:36:08.905 trtype: tcp 00:36:08.905 adrfam: ipv4 00:36:08.905 subtype: current discovery subsystem 00:36:08.905 treq: not specified, sq flow control disable supported 00:36:08.905 portid: 1 00:36:08.905 trsvcid: 4420 00:36:08.905 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:08.905 traddr: 10.0.0.1 00:36:08.905 eflags: none 00:36:08.905 sectype: none 00:36:08.905 =====Discovery Log Entry 1====== 00:36:08.905 trtype: tcp 00:36:08.905 adrfam: ipv4 00:36:08.905 subtype: nvme subsystem 00:36:08.905 treq: not specified, sq flow control disable 
supported 00:36:08.905 portid: 1 00:36:08.905 trsvcid: 4420 00:36:08.905 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:08.905 traddr: 10.0.0.1 00:36:08.905 eflags: none 00:36:08.905 sectype: none 00:36:08.905 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:36:08.905 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:36:08.905 ===================================================== 00:36:08.905 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:36:08.905 ===================================================== 00:36:08.905 Controller Capabilities/Features 00:36:08.905 ================================ 00:36:08.905 Vendor ID: 0000 00:36:08.905 Subsystem Vendor ID: 0000 00:36:08.905 Serial Number: 3f56c8abbc144523de7b 00:36:08.905 Model Number: Linux 00:36:08.905 Firmware Version: 6.8.9-20 00:36:08.905 Recommended Arb Burst: 0 00:36:08.905 IEEE OUI Identifier: 00 00 00 00:36:08.905 Multi-path I/O 00:36:08.905 May have multiple subsystem ports: No 00:36:08.905 May have multiple controllers: No 00:36:08.905 Associated with SR-IOV VF: No 00:36:08.905 Max Data Transfer Size: Unlimited 00:36:08.905 Max Number of Namespaces: 0 00:36:08.905 Max Number of I/O Queues: 1024 00:36:08.905 NVMe Specification Version (VS): 1.3 00:36:08.905 NVMe Specification Version (Identify): 1.3 00:36:08.905 Maximum Queue Entries: 1024 00:36:08.905 Contiguous Queues Required: No 00:36:08.905 Arbitration Mechanisms Supported 00:36:08.905 Weighted Round Robin: Not Supported 00:36:08.905 Vendor Specific: Not Supported 00:36:08.905 Reset Timeout: 7500 ms 00:36:08.905 Doorbell Stride: 4 bytes 00:36:08.905 NVM Subsystem Reset: Not Supported 00:36:08.905 Command Sets Supported 00:36:08.905 NVM Command Set: Supported 00:36:08.905 Boot Partition: Not Supported 00:36:08.905 Memory Page Size Minimum: 4096 bytes 00:36:08.905 Memory Page Size Maximum: 4096 bytes 00:36:08.905 Persistent Memory Region: Not Supported 00:36:08.905 Optional Asynchronous Events Supported 00:36:08.905 Namespace Attribute Notices: Not Supported 00:36:08.905 Firmware Activation Notices: Not Supported 00:36:08.905 ANA Change Notices: Not Supported 00:36:08.905 PLE Aggregate Log Change Notices: Not Supported 00:36:08.905 LBA Status Info Alert Notices: Not Supported 00:36:08.905 EGE Aggregate Log Change Notices: Not Supported 00:36:08.905 Normal NVM Subsystem Shutdown event: Not Supported 00:36:08.905 Zone Descriptor Change Notices: Not Supported 00:36:08.905 Discovery Log Change Notices: Supported 00:36:08.905 Controller Attributes 00:36:08.905 128-bit Host Identifier: Not Supported 00:36:08.905 Non-Operational Permissive Mode: Not Supported 00:36:08.905 NVM Sets: Not Supported 00:36:08.905 Read Recovery Levels: Not Supported 00:36:08.905 Endurance Groups: Not Supported 00:36:08.905 Predictable Latency Mode: Not Supported 00:36:08.905 Traffic Based Keep ALive: Not Supported 00:36:08.905 Namespace Granularity: Not Supported 00:36:08.905 SQ Associations: Not Supported 00:36:08.905 UUID List: Not Supported 00:36:08.905 Multi-Domain Subsystem: Not Supported 00:36:08.905 Fixed Capacity Management: Not Supported 00:36:08.905 Variable Capacity Management: Not Supported 00:36:08.905 Delete Endurance Group: Not Supported 00:36:08.905 Delete NVM Set: Not Supported 00:36:08.905 Extended LBA Formats Supported: Not Supported 00:36:08.905 Flexible Data Placement 
Supported: Not Supported 00:36:08.905 00:36:08.905 Controller Memory Buffer Support 00:36:08.905 ================================ 00:36:08.905 Supported: No 00:36:08.905 00:36:08.905 Persistent Memory Region Support 00:36:08.905 ================================ 00:36:08.905 Supported: No 00:36:08.905 00:36:08.905 Admin Command Set Attributes 00:36:08.905 ============================ 00:36:08.905 Security Send/Receive: Not Supported 00:36:08.905 Format NVM: Not Supported 00:36:08.905 Firmware Activate/Download: Not Supported 00:36:08.905 Namespace Management: Not Supported 00:36:08.905 Device Self-Test: Not Supported 00:36:08.905 Directives: Not Supported 00:36:08.905 NVMe-MI: Not Supported 00:36:08.905 Virtualization Management: Not Supported 00:36:08.905 Doorbell Buffer Config: Not Supported 00:36:08.905 Get LBA Status Capability: Not Supported 00:36:08.905 Command & Feature Lockdown Capability: Not Supported 00:36:08.905 Abort Command Limit: 1 00:36:08.905 Async Event Request Limit: 1 00:36:08.905 Number of Firmware Slots: N/A 00:36:08.905 Firmware Slot 1 Read-Only: N/A 00:36:08.905 Firmware Activation Without Reset: N/A 00:36:08.905 Multiple Update Detection Support: N/A 00:36:08.905 Firmware Update Granularity: No Information Provided 00:36:08.905 Per-Namespace SMART Log: No 00:36:08.905 Asymmetric Namespace Access Log Page: Not Supported 00:36:08.905 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:36:08.905 Command Effects Log Page: Not Supported 00:36:08.905 Get Log Page Extended Data: Supported 00:36:08.905 Telemetry Log Pages: Not Supported 00:36:08.905 Persistent Event Log Pages: Not Supported 00:36:08.905 Supported Log Pages Log Page: May Support 00:36:08.905 Commands Supported & Effects Log Page: Not Supported 00:36:08.905 Feature Identifiers & Effects Log Page:May Support 00:36:08.905 NVMe-MI Commands & Effects Log Page: May Support 00:36:08.905 Data Area 4 for Telemetry Log: Not Supported 00:36:08.905 Error Log Page Entries Supported: 1 00:36:08.905 Keep Alive: Not Supported 00:36:08.905 00:36:08.905 NVM Command Set Attributes 00:36:08.905 ========================== 00:36:08.905 Submission Queue Entry Size 00:36:08.905 Max: 1 00:36:08.905 Min: 1 00:36:08.905 Completion Queue Entry Size 00:36:08.905 Max: 1 00:36:08.905 Min: 1 00:36:08.905 Number of Namespaces: 0 00:36:08.905 Compare Command: Not Supported 00:36:08.905 Write Uncorrectable Command: Not Supported 00:36:08.905 Dataset Management Command: Not Supported 00:36:08.905 Write Zeroes Command: Not Supported 00:36:08.905 Set Features Save Field: Not Supported 00:36:08.905 Reservations: Not Supported 00:36:08.905 Timestamp: Not Supported 00:36:08.905 Copy: Not Supported 00:36:08.905 Volatile Write Cache: Not Present 00:36:08.905 Atomic Write Unit (Normal): 1 00:36:08.905 Atomic Write Unit (PFail): 1 00:36:08.905 Atomic Compare & Write Unit: 1 00:36:08.905 Fused Compare & Write: Not Supported 00:36:08.905 Scatter-Gather List 00:36:08.905 SGL Command Set: Supported 00:36:08.905 SGL Keyed: Not Supported 00:36:08.905 SGL Bit Bucket Descriptor: Not Supported 00:36:08.905 SGL Metadata Pointer: Not Supported 00:36:08.905 Oversized SGL: Not Supported 00:36:08.905 SGL Metadata Address: Not Supported 00:36:08.905 SGL Offset: Supported 00:36:08.905 Transport SGL Data Block: Not Supported 00:36:08.905 Replay Protected Memory Block: Not Supported 00:36:08.905 00:36:08.905 Firmware Slot Information 00:36:08.905 ========================= 00:36:08.905 Active slot: 0 00:36:08.905 00:36:08.905 00:36:08.905 Error Log 00:36:08.905 
========= 00:36:08.905 00:36:08.905 Active Namespaces 00:36:08.905 ================= 00:36:08.905 Discovery Log Page 00:36:08.905 ================== 00:36:08.905 Generation Counter: 2 00:36:08.905 Number of Records: 2 00:36:08.905 Record Format: 0 00:36:08.905 00:36:08.905 Discovery Log Entry 0 00:36:08.905 ---------------------- 00:36:08.905 Transport Type: 3 (TCP) 00:36:08.905 Address Family: 1 (IPv4) 00:36:08.905 Subsystem Type: 3 (Current Discovery Subsystem) 00:36:08.905 Entry Flags: 00:36:08.906 Duplicate Returned Information: 0 00:36:08.906 Explicit Persistent Connection Support for Discovery: 0 00:36:08.906 Transport Requirements: 00:36:08.906 Secure Channel: Not Specified 00:36:08.906 Port ID: 1 (0x0001) 00:36:08.906 Controller ID: 65535 (0xffff) 00:36:08.906 Admin Max SQ Size: 32 00:36:08.906 Transport Service Identifier: 4420 00:36:08.906 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:36:08.906 Transport Address: 10.0.0.1 00:36:08.906 Discovery Log Entry 1 00:36:08.906 ---------------------- 00:36:08.906 Transport Type: 3 (TCP) 00:36:08.906 Address Family: 1 (IPv4) 00:36:08.906 Subsystem Type: 2 (NVM Subsystem) 00:36:08.906 Entry Flags: 00:36:08.906 Duplicate Returned Information: 0 00:36:08.906 Explicit Persistent Connection Support for Discovery: 0 00:36:08.906 Transport Requirements: 00:36:08.906 Secure Channel: Not Specified 00:36:08.906 Port ID: 1 (0x0001) 00:36:08.906 Controller ID: 65535 (0xffff) 00:36:08.906 Admin Max SQ Size: 32 00:36:08.906 Transport Service Identifier: 4420 00:36:08.906 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:36:08.906 Transport Address: 10.0.0.1 00:36:08.906 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:08.906 get_feature(0x01) failed 00:36:08.906 get_feature(0x02) failed 00:36:08.906 get_feature(0x04) failed 00:36:08.906 ===================================================== 00:36:08.906 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:08.906 ===================================================== 00:36:08.906 Controller Capabilities/Features 00:36:08.906 ================================ 00:36:08.906 Vendor ID: 0000 00:36:08.906 Subsystem Vendor ID: 0000 00:36:08.906 Serial Number: c5bc9cac847ecbfca270 00:36:08.906 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:36:08.906 Firmware Version: 6.8.9-20 00:36:08.906 Recommended Arb Burst: 6 00:36:08.906 IEEE OUI Identifier: 00 00 00 00:36:08.906 Multi-path I/O 00:36:08.906 May have multiple subsystem ports: Yes 00:36:08.906 May have multiple controllers: Yes 00:36:08.906 Associated with SR-IOV VF: No 00:36:08.906 Max Data Transfer Size: Unlimited 00:36:08.906 Max Number of Namespaces: 1024 00:36:08.906 Max Number of I/O Queues: 128 00:36:08.906 NVMe Specification Version (VS): 1.3 00:36:08.906 NVMe Specification Version (Identify): 1.3 00:36:08.906 Maximum Queue Entries: 1024 00:36:08.906 Contiguous Queues Required: No 00:36:08.906 Arbitration Mechanisms Supported 00:36:08.906 Weighted Round Robin: Not Supported 00:36:08.906 Vendor Specific: Not Supported 00:36:08.906 Reset Timeout: 7500 ms 00:36:08.906 Doorbell Stride: 4 bytes 00:36:08.906 NVM Subsystem Reset: Not Supported 00:36:08.906 Command Sets Supported 00:36:08.906 NVM Command Set: Supported 00:36:08.906 Boot Partition: Not Supported 00:36:08.906 
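
Two things in the identify output above are worth decoding. The discovery log reported transport type 3 (TCP) and address family 1 (IPv4), with two entries: subsystem type 3 is the discovery subsystem itself, and subsystem type 2 is the nqn.2016-06.io.spdk:testnqn NVM subsystem. The get_feature(0x01)/(0x02)/(0x04) failures against testnqn are expected when the target is the Linux kernel nvmet stack, which implements only a handful of Get Features IDs, so reads of Arbitration, Power Management, and Temperature Threshold fail and the tool carries on. A hedged nvme-cli equivalent of the same discovery pass, assuming nvme-cli is available on the initiator side:

  # Query the same kernel discovery controller that spdk_nvme_identify read above.
  nvme discover -t tcp -a 10.0.0.1 -s 4420
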
Memory Page Size Minimum: 4096 bytes 00:36:08.906 Memory Page Size Maximum: 4096 bytes 00:36:08.906 Persistent Memory Region: Not Supported 00:36:08.906 Optional Asynchronous Events Supported 00:36:08.906 Namespace Attribute Notices: Supported 00:36:08.906 Firmware Activation Notices: Not Supported 00:36:08.906 ANA Change Notices: Supported 00:36:08.906 PLE Aggregate Log Change Notices: Not Supported 00:36:08.906 LBA Status Info Alert Notices: Not Supported 00:36:08.906 EGE Aggregate Log Change Notices: Not Supported 00:36:08.906 Normal NVM Subsystem Shutdown event: Not Supported 00:36:08.906 Zone Descriptor Change Notices: Not Supported 00:36:08.906 Discovery Log Change Notices: Not Supported 00:36:08.906 Controller Attributes 00:36:08.906 128-bit Host Identifier: Supported 00:36:08.906 Non-Operational Permissive Mode: Not Supported 00:36:08.906 NVM Sets: Not Supported 00:36:08.906 Read Recovery Levels: Not Supported 00:36:08.906 Endurance Groups: Not Supported 00:36:08.906 Predictable Latency Mode: Not Supported 00:36:08.906 Traffic Based Keep ALive: Supported 00:36:08.906 Namespace Granularity: Not Supported 00:36:08.906 SQ Associations: Not Supported 00:36:08.906 UUID List: Not Supported 00:36:08.906 Multi-Domain Subsystem: Not Supported 00:36:08.906 Fixed Capacity Management: Not Supported 00:36:08.906 Variable Capacity Management: Not Supported 00:36:08.906 Delete Endurance Group: Not Supported 00:36:08.906 Delete NVM Set: Not Supported 00:36:08.906 Extended LBA Formats Supported: Not Supported 00:36:08.906 Flexible Data Placement Supported: Not Supported 00:36:08.906 00:36:08.906 Controller Memory Buffer Support 00:36:08.906 ================================ 00:36:08.906 Supported: No 00:36:08.906 00:36:08.906 Persistent Memory Region Support 00:36:08.906 ================================ 00:36:08.906 Supported: No 00:36:08.906 00:36:08.906 Admin Command Set Attributes 00:36:08.906 ============================ 00:36:08.906 Security Send/Receive: Not Supported 00:36:08.906 Format NVM: Not Supported 00:36:08.906 Firmware Activate/Download: Not Supported 00:36:08.906 Namespace Management: Not Supported 00:36:08.906 Device Self-Test: Not Supported 00:36:08.906 Directives: Not Supported 00:36:08.906 NVMe-MI: Not Supported 00:36:08.906 Virtualization Management: Not Supported 00:36:08.906 Doorbell Buffer Config: Not Supported 00:36:08.906 Get LBA Status Capability: Not Supported 00:36:08.906 Command & Feature Lockdown Capability: Not Supported 00:36:08.906 Abort Command Limit: 4 00:36:08.906 Async Event Request Limit: 4 00:36:08.906 Number of Firmware Slots: N/A 00:36:08.906 Firmware Slot 1 Read-Only: N/A 00:36:08.906 Firmware Activation Without Reset: N/A 00:36:08.906 Multiple Update Detection Support: N/A 00:36:08.906 Firmware Update Granularity: No Information Provided 00:36:08.906 Per-Namespace SMART Log: Yes 00:36:08.906 Asymmetric Namespace Access Log Page: Supported 00:36:08.906 ANA Transition Time : 10 sec 00:36:08.906 00:36:08.906 Asymmetric Namespace Access Capabilities 00:36:08.906 ANA Optimized State : Supported 00:36:08.906 ANA Non-Optimized State : Supported 00:36:08.906 ANA Inaccessible State : Supported 00:36:08.906 ANA Persistent Loss State : Supported 00:36:08.906 ANA Change State : Supported 00:36:08.906 ANAGRPID is not changed : No 00:36:08.906 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:36:08.906 00:36:08.906 ANA Group Identifier Maximum : 128 00:36:08.906 Number of ANA Group Identifiers : 128 00:36:08.906 Max Number of Allowed Namespaces : 1024 00:36:08.906 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:36:08.906 Command Effects Log Page: Supported 00:36:08.906 Get Log Page Extended Data: Supported 00:36:08.906 Telemetry Log Pages: Not Supported 00:36:08.906 Persistent Event Log Pages: Not Supported 00:36:08.906 Supported Log Pages Log Page: May Support 00:36:08.906 Commands Supported & Effects Log Page: Not Supported 00:36:08.906 Feature Identifiers & Effects Log Page:May Support 00:36:08.906 NVMe-MI Commands & Effects Log Page: May Support 00:36:08.906 Data Area 4 for Telemetry Log: Not Supported 00:36:08.906 Error Log Page Entries Supported: 128 00:36:08.906 Keep Alive: Supported 00:36:08.906 Keep Alive Granularity: 1000 ms 00:36:08.906 00:36:08.906 NVM Command Set Attributes 00:36:08.906 ========================== 00:36:08.906 Submission Queue Entry Size 00:36:08.906 Max: 64 00:36:08.906 Min: 64 00:36:08.906 Completion Queue Entry Size 00:36:08.906 Max: 16 00:36:08.906 Min: 16 00:36:08.906 Number of Namespaces: 1024 00:36:08.906 Compare Command: Not Supported 00:36:08.906 Write Uncorrectable Command: Not Supported 00:36:08.906 Dataset Management Command: Supported 00:36:08.906 Write Zeroes Command: Supported 00:36:08.906 Set Features Save Field: Not Supported 00:36:08.906 Reservations: Not Supported 00:36:08.906 Timestamp: Not Supported 00:36:08.906 Copy: Not Supported 00:36:08.906 Volatile Write Cache: Present 00:36:08.906 Atomic Write Unit (Normal): 1 00:36:08.906 Atomic Write Unit (PFail): 1 00:36:08.906 Atomic Compare & Write Unit: 1 00:36:08.906 Fused Compare & Write: Not Supported 00:36:08.906 Scatter-Gather List 00:36:08.906 SGL Command Set: Supported 00:36:08.906 SGL Keyed: Not Supported 00:36:08.906 SGL Bit Bucket Descriptor: Not Supported 00:36:08.906 SGL Metadata Pointer: Not Supported 00:36:08.906 Oversized SGL: Not Supported 00:36:08.906 SGL Metadata Address: Not Supported 00:36:08.906 SGL Offset: Supported 00:36:08.906 Transport SGL Data Block: Not Supported 00:36:08.906 Replay Protected Memory Block: Not Supported 00:36:08.906 00:36:08.906 Firmware Slot Information 00:36:08.906 ========================= 00:36:08.906 Active slot: 0 00:36:08.906 00:36:08.906 Asymmetric Namespace Access 00:36:08.907 =========================== 00:36:08.907 Change Count : 0 00:36:08.907 Number of ANA Group Descriptors : 1 00:36:08.907 ANA Group Descriptor : 0 00:36:08.907 ANA Group ID : 1 00:36:08.907 Number of NSID Values : 1 00:36:08.907 Change Count : 0 00:36:08.907 ANA State : 1 00:36:08.907 Namespace Identifier : 1 00:36:08.907 00:36:08.907 Commands Supported and Effects 00:36:08.907 ============================== 00:36:08.907 Admin Commands 00:36:08.907 -------------- 00:36:08.907 Get Log Page (02h): Supported 00:36:08.907 Identify (06h): Supported 00:36:08.907 Abort (08h): Supported 00:36:08.907 Set Features (09h): Supported 00:36:08.907 Get Features (0Ah): Supported 00:36:08.907 Asynchronous Event Request (0Ch): Supported 00:36:08.907 Keep Alive (18h): Supported 00:36:08.907 I/O Commands 00:36:08.907 ------------ 00:36:08.907 Flush (00h): Supported 00:36:08.907 Write (01h): Supported LBA-Change 00:36:08.907 Read (02h): Supported 00:36:08.907 Write Zeroes (08h): Supported LBA-Change 00:36:08.907 Dataset Management (09h): Supported 00:36:08.907 00:36:08.907 Error Log 00:36:08.907 ========= 00:36:08.907 Entry: 0 00:36:08.907 Error Count: 0x3 00:36:08.907 Submission Queue Id: 0x0 00:36:08.907 Command Id: 0x5 00:36:08.907 Phase Bit: 0 00:36:08.907 Status Code: 0x2 00:36:08.907 Status Code Type: 0x0 00:36:08.907 Do Not Retry: 1 00:36:08.907 
Error Location: 0x28 00:36:08.907 LBA: 0x0 00:36:08.907 Namespace: 0x0 00:36:08.907 Vendor Log Page: 0x0 00:36:08.907 ----------- 00:36:08.907 Entry: 1 00:36:08.907 Error Count: 0x2 00:36:08.907 Submission Queue Id: 0x0 00:36:08.907 Command Id: 0x5 00:36:08.907 Phase Bit: 0 00:36:08.907 Status Code: 0x2 00:36:08.907 Status Code Type: 0x0 00:36:08.907 Do Not Retry: 1 00:36:08.907 Error Location: 0x28 00:36:08.907 LBA: 0x0 00:36:08.907 Namespace: 0x0 00:36:08.907 Vendor Log Page: 0x0 00:36:08.907 ----------- 00:36:08.907 Entry: 2 00:36:08.907 Error Count: 0x1 00:36:08.907 Submission Queue Id: 0x0 00:36:08.907 Command Id: 0x4 00:36:08.907 Phase Bit: 0 00:36:08.907 Status Code: 0x2 00:36:08.907 Status Code Type: 0x0 00:36:08.907 Do Not Retry: 1 00:36:08.907 Error Location: 0x28 00:36:08.907 LBA: 0x0 00:36:08.907 Namespace: 0x0 00:36:08.907 Vendor Log Page: 0x0 00:36:08.907 00:36:08.907 Number of Queues 00:36:08.907 ================ 00:36:08.907 Number of I/O Submission Queues: 128 00:36:08.907 Number of I/O Completion Queues: 128 00:36:08.907 00:36:08.907 ZNS Specific Controller Data 00:36:08.907 ============================ 00:36:08.907 Zone Append Size Limit: 0 00:36:08.907 00:36:08.907 00:36:08.907 Active Namespaces 00:36:08.907 ================= 00:36:08.907 get_feature(0x05) failed 00:36:08.907 Namespace ID:1 00:36:08.907 Command Set Identifier: NVM (00h) 00:36:08.907 Deallocate: Supported 00:36:08.907 Deallocated/Unwritten Error: Not Supported 00:36:08.907 Deallocated Read Value: Unknown 00:36:08.907 Deallocate in Write Zeroes: Not Supported 00:36:08.907 Deallocated Guard Field: 0xFFFF 00:36:08.907 Flush: Supported 00:36:08.907 Reservation: Not Supported 00:36:08.907 Namespace Sharing Capabilities: Multiple Controllers 00:36:08.907 Size (in LBAs): 3750748848 (1788GiB) 00:36:08.907 Capacity (in LBAs): 3750748848 (1788GiB) 00:36:08.907 Utilization (in LBAs): 3750748848 (1788GiB) 00:36:08.907 UUID: ad59623d-d93b-46ff-8a2b-63dcac936f1a 00:36:08.907 Thin Provisioning: Not Supported 00:36:08.907 Per-NS Atomic Units: Yes 00:36:08.907 Atomic Write Unit (Normal): 8 00:36:08.907 Atomic Write Unit (PFail): 8 00:36:08.907 Preferred Write Granularity: 8 00:36:08.907 Atomic Compare & Write Unit: 8 00:36:08.907 Atomic Boundary Size (Normal): 0 00:36:08.907 Atomic Boundary Size (PFail): 0 00:36:08.907 Atomic Boundary Offset: 0 00:36:08.907 NGUID/EUI64 Never Reused: No 00:36:08.907 ANA group ID: 1 00:36:08.907 Namespace Write Protected: No 00:36:08.907 Number of LBA Formats: 1 00:36:08.907 Current LBA Format: LBA Format #00 00:36:08.907 LBA Format #00: Data Size: 512 Metadata Size: 0 00:36:08.907 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:08.907 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:09.169 rmmod nvme_tcp 00:36:09.169 rmmod nvme_fabrics 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:09.169 09:53:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:11.081 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:11.342 09:53:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:14.643 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:36:14.643 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:36:14.903 00:36:14.903 real 0m18.960s 00:36:14.903 user 0m5.028s 00:36:14.903 sys 0m10.836s 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:36:14.903 ************************************ 00:36:14.903 END TEST nvmf_identify_kernel_target 00:36:14.903 ************************************ 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:14.903 ************************************ 00:36:14.903 START TEST nvmf_auth_host 00:36:14.903 ************************************ 00:36:14.903 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:36:15.164 * Looking for test storage... 
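
For reference, the clean_kernel_target teardown traced just before this test boundary reduces to a handful of configfs operations. A minimal sketch, assuming the testnqn subsystem, namespace 1, and port 1 from the trace (run as root); the bare 'echo 0' in the trace is, by my reading, disabling the namespace before removal:

  nqn=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn   # unlink port -> subsystem
  rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir /sys/kernel/config/nvmet/subsystems/$nqn
  modprobe -r nvmet_tcp nvmet   # unload once the configfs tree is empty, as the trace does
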
00:36:15.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.164 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.165 --rc genhtml_branch_coverage=1 00:36:15.165 --rc genhtml_function_coverage=1 00:36:15.165 --rc genhtml_legend=1 00:36:15.165 --rc geninfo_all_blocks=1 00:36:15.165 --rc geninfo_unexecuted_blocks=1 00:36:15.165 00:36:15.165 ' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.165 --rc genhtml_branch_coverage=1 00:36:15.165 --rc genhtml_function_coverage=1 00:36:15.165 --rc genhtml_legend=1 00:36:15.165 --rc geninfo_all_blocks=1 00:36:15.165 --rc geninfo_unexecuted_blocks=1 00:36:15.165 00:36:15.165 ' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.165 --rc genhtml_branch_coverage=1 00:36:15.165 --rc genhtml_function_coverage=1 00:36:15.165 --rc genhtml_legend=1 00:36:15.165 --rc geninfo_all_blocks=1 00:36:15.165 --rc geninfo_unexecuted_blocks=1 00:36:15.165 00:36:15.165 ' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.165 --rc genhtml_branch_coverage=1 00:36:15.165 --rc genhtml_function_coverage=1 00:36:15.165 --rc genhtml_legend=1 00:36:15.165 --rc geninfo_all_blocks=1 00:36:15.165 --rc geninfo_unexecuted_blocks=1 00:36:15.165 00:36:15.165 ' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.165 09:53:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.165 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.166 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.166 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.166 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.166 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.166 09:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:36:23.304 09:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.304 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:23.305 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:23.305 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.305 
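
The scan above walks each candidate NIC and compares its PCI IDs against the tables built a few lines earlier (0x1592 and 0x159b are the Intel E810 variants; this rig matches 0x159b at 0000:4b:00.0 and 0000:4b:00.1). A standalone sysfs version of the same check, independent of the helpers in nvmf/common.sh:

  # Flag Intel E810 NICs (0x1592/0x159b) and list their net devices, as the trace does.
  for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor")    # e.g. 0x8086
    device=$(<"$dev/device")    # e.g. 0x159b
    if [[ $vendor == 0x8086 && ( $device == 0x1592 || $device == 0x159b ) ]]; then
      echo "E810 at ${dev##*/}: $(ls "$dev/net" 2>/dev/null)"
    fi
  done
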
09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:23.305 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:23.305 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.305 09:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:23.305 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:23.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:36:23.305 00:36:23.305 --- 10.0.0.2 ping statistics --- 00:36:23.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.305 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:23.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:36:23.306 00:36:23.306 --- 10.0.0.1 ping statistics --- 00:36:23.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.306 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3030978 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3030978 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3030978 ']' 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
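
At this point nvmftestinit has rebuilt the split topology (target NIC cvl_0_0 inside the cvl_0_0_ns_spdk namespace as 10.0.0.2, initiator NIC cvl_0_1 in the root namespace as 10.0.0.1, verified by the two pings), and nvmfappstart launches nvmf_tgt inside the namespace. A hedged sketch of how those steps fit together, run as root from the spdk repo root; the polling loop stands in for waitforlisten, and rpc.py plus /var/tmp/spdk.sock are the paths from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Start the target in the namespace with the flags from the trace, then poll
  # its RPC socket (unix sockets cross net namespaces, only the net ns differs).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done
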
00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=beeb052bb408b941399c4890f16a7418 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Pd5 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key beeb052bb408b941399c4890f16a7418 0 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 beeb052bb408b941399c4890f16a7418 0 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=beeb052bb408b941399c4890f16a7418 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Pd5 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Pd5 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Pd5 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.306 09:53:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=21529b93e6331e385aad0ddaee9b2c4f3dc4a9353a4eefa31aaea99f95e47dda 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.5IK 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 21529b93e6331e385aad0ddaee9b2c4f3dc4a9353a4eefa31aaea99f95e47dda 3 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 21529b93e6331e385aad0ddaee9b2c4f3dc4a9353a4eefa31aaea99f95e47dda 3 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=21529b93e6331e385aad0ddaee9b2c4f3dc4a9353a4eefa31aaea99f95e47dda 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:23.306 09:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.5IK 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.5IK 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.5IK 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.306 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cfd6b5b6677474534c302ffa8d8b06debb738b46fb186ed 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.K09 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cfd6b5b6677474534c302ffa8d8b06debb738b46fb186ed 0 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cfd6b5b6677474534c302ffa8d8b06debb738b46fb186ed 0 
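
Each gen_dhchap_key call traced here follows the same recipe: pull len/2 random bytes as hex via xxd, frame them as a DH-HMAC-CHAP secret with an inline python step, and store the result mode 0600 in a mktemp file whose suffix names the digest (null/sha256/sha384/sha512 map to digest numbers 0-3 in the trace). A hedged reconstruction for one null-digest 32-character key; the DHHC-1 framing below (base64 of the key bytes plus a little-endian CRC32 trailer, '00' marking no associated hash) is my reading of the standard secret format, not copied from the script:

  key=$(xxd -p -c0 -l 16 /dev/urandom)      # 32 hex chars, as in the trace
  file=$(mktemp -t spdk.key-null.XXX)
  python3 - "$key" > "$file" <<'PY'
  import base64, binascii, sys
  raw = bytes.fromhex(sys.argv[1])          # assumption: the key is hex-decoded before framing
  crc = binascii.crc32(raw).to_bytes(4, "little")
  print("DHHC-1:00:%s:" % base64.b64encode(raw + crc).decode())
  PY
  chmod 0600 "$file"
  echo "$file"
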
00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cfd6b5b6677474534c302ffa8d8b06debb738b46fb186ed 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.K09 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.K09 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.K09 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=903f0f48aab1da10e4c8ead40a4aeb5786b5d123fd5e413f 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2sw 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 903f0f48aab1da10e4c8ead40a4aeb5786b5d123fd5e413f 2 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 903f0f48aab1da10e4c8ead40a4aeb5786b5d123fd5e413f 2 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=903f0f48aab1da10e4c8ead40a4aeb5786b5d123fd5e413f 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2sw 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2sw 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2sw 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.307 09:53:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e2b9b4cc18192a8003e88cabd0584ed9 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UJY 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e2b9b4cc18192a8003e88cabd0584ed9 1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e2b9b4cc18192a8003e88cabd0584ed9 1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e2b9b4cc18192a8003e88cabd0584ed9 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UJY 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UJY 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.UJY 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b64274408d8efcf8d5002f0896fdbcf3 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.5re 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b64274408d8efcf8d5002f0896fdbcf3 1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b64274408d8efcf8d5002f0896fdbcf3 1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b64274408d8efcf8d5002f0896fdbcf3 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.5re 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.5re 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.5re 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:36:23.307 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ac2e826f4feb86916dc2b0b22a30d3157cb8237d04153fcd 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.TpT 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ac2e826f4feb86916dc2b0b22a30d3157cb8237d04153fcd 2 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ac2e826f4feb86916dc2b0b22a30d3157cb8237d04153fcd 2 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ac2e826f4feb86916dc2b0b22a30d3157cb8237d04153fcd 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.TpT 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.TpT 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.TpT 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:36:23.308 09:53:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=340c074309210d3824c4aed792fd6eac 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.nLf 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 340c074309210d3824c4aed792fd6eac 0 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 340c074309210d3824c4aed792fd6eac 0 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=340c074309210d3824c4aed792fd6eac 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.nLf 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.nLf 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nLf 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=902cddea3c8cfedda9b3d37abb36985872701db0370bb325742b1076922cb704 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kor 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 902cddea3c8cfedda9b3d37abb36985872701db0370bb325742b1076922cb704 3 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 902cddea3c8cfedda9b3d37abb36985872701db0370bb325742b1076922cb704 3 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=902cddea3c8cfedda9b3d37abb36985872701db0370bb325742b1076922cb704 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kor 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kor 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.kor 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3030978 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3030978 ']' 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Pd5 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.5IK ]] 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.5IK 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.K09 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2sw ]] 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.2sw 00:36:23.308 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.UJY 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.5re ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.5re 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TpT 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nLf ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nLf 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kor 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:36:23.309 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:23.570 09:53:58 
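With all five key/ckey pairs on disk, host/auth.sh@79 waitforlisten blocks until the target's RPC socket (/var/tmp/spdk.sock) is up, then registers the files with the SPDK keyring. The trace above is the unrolled form of the loop at host/auth.sh@80-82, roughly:

    for i in "${!keys[@]}"; do
        rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
        # ckeys[4] is empty, so controller keys are added only when present
        [[ -n ${ckeys[i]} ]] && rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    done

The keyring names key0..key4 / ckey0..ckey3 registered here are what the later bdev_nvme_attach_controller calls reference via --dhchap-key / --dhchap-ctrlr-key.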
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:23.570 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:23.571 09:53:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:26.870 Waiting for block devices as requested 00:36:26.870 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:26.870 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:26.870 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:27.130 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:27.130 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:27.130 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:27.391 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:27.391 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:27.391 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:27.653 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:27.653 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:27.653 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:27.916 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:27.916 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:27.916 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:27.916 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:28.183 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:29.125 No valid GPT data, bailing 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:29.125 09:54:04 
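nvmet_auth_init then stands up a kernel nvmet target to authenticate against: get_main_ns_ip picks the listen address by transport (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp, hence 10.0.0.1 here), setup.sh reset rebinds the devices as logged above, and configure_kernel_target drives configfs around the one usable namespace (/dev/nvme0n1, after the GPT probe bailed). xtrace prints each echo but not its redirect target, so the attribute paths below are assumptions based on the standard nvmet configfs layout; the values and ordering come from the trace:

    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0
    mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    mkdir ports/1
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > subsystems/nqn.2024-02.io.spdk:cnode0/attr_model   # target assumed
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/attr_allow_any_host             # target assumed
    echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
    echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s "$PWD/subsystems/nqn.2024-02.io.spdk:cnode0" ports/1/subsystems/

The nvme discover output that follows confirms the port is live: entry 0 is the well-known discovery subsystem, entry 1 is nqn.2024-02.io.spdk:cnode0 itself.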
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:36:29.125 00:36:29.125 Discovery Log Number of Records 2, Generation counter 2 00:36:29.125 =====Discovery Log Entry 0====== 00:36:29.125 trtype: tcp 00:36:29.125 adrfam: ipv4 00:36:29.125 subtype: current discovery subsystem 00:36:29.125 treq: not specified, sq flow control disable supported 00:36:29.125 portid: 1 00:36:29.125 trsvcid: 4420 00:36:29.125 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:29.125 traddr: 10.0.0.1 00:36:29.125 eflags: none 00:36:29.125 sectype: none 00:36:29.125 =====Discovery Log Entry 1====== 00:36:29.125 trtype: tcp 00:36:29.125 adrfam: ipv4 00:36:29.125 subtype: nvme subsystem 00:36:29.125 treq: not specified, sq flow control disable supported 00:36:29.125 portid: 1 00:36:29.125 trsvcid: 4420 00:36:29.125 subnqn: nqn.2024-02.io.spdk:cnode0 00:36:29.125 traddr: 10.0.0.1 00:36:29.125 eflags: none 00:36:29.125 sectype: none 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.125 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.386 nvme0n1 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
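Before the per-algorithm sweep, nvmet_auth_init whitelists nqn.2024-02.io.spdk:host0 (mkdir under hosts/, ln -s into the subsystem's allowed_hosts), and nvmet_auth_set_key pushes 'hmac(sha256)', the dhgroup name, and the DHHC-1 key/ckey strings into that host entry, again via echoes whose redirect targets xtrace hides. Each connect_authenticate round is then a three-step check, condensed from the trace:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # handshake worked
    rpc_cmd bdev_nvme_detach_controller nvme0

A controller named nvme0 showing up in bdev_nvme_get_controllers is the pass condition; the detach resets state for the next combination. Keyid 4 has no controller key (ckeys[4] is empty), so its attach omits --dhchap-ctrlr-key and bidirectional authentication is skipped for that round.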
00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.386 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.647 nvme0n1 00:36:29.647 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.647 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.647 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.648 09:54:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.648 09:54:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.908 nvme0n1 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:29.908 nvme0n1 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:29.908 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.168 nvme0n1 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.168 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.429 nvme0n1 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:30.429 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.690 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.691 09:54:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.691 nvme0n1 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.691 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:30.952 
09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.952 nvme0n1 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.952 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.213 09:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.213 nvme0n1 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.213 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:31.473 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.474 09:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.474 nvme0n1 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.474 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:31.735 09:54:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.735 09:54:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.735 nvme0n1 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.735 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.996 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.258 nvme0n1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:32.258 09:54:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.258 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.519 nvme0n1 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:32.519 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
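Every iteration logged above follows the same shape: for each DH group (ffdhe2048, ffdhe3072, ffdhe4096 and ffdhe6144 are exercised in this section) and each key index 0-4, host/auth.sh loads the key into the kernel target with nvmet_auth_set_key, restricts the SPDK host to the matching digest/DH-group pair with bdev_nvme_set_options, attaches nvme0 over TCP at 10.0.0.1:4420 with the corresponding --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key is defined), checks that the controller shows up in bdev_nvme_get_controllers, and detaches it. A condensed sketch of that loop, assuming SPDK's scripts/rpc.py is on PATH and the named keys key0..key4 / ckey0..ckey3 are already registered (ckeys is the script's array of controller keys; this is an illustration, not the verbatim host/auth.sh):

    # One DH-HMAC-CHAP round-trip per (dhgroup, keyid) pair; the target-side
    # nvmet_auth_set_key setup is omitted for brevity.
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
      for keyid in 0 1 2 3 4; do
        # Host side: accept only this digest/DH-group combination.
        rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        # Attach over TCP; the ':+' expansion appends the controller key
        # only when ckeys[keyid] is set and non-empty (keyid 4 has none).
        rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
        # The attach succeeds only if authentication did; verify, then tear down.
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc.py bdev_nvme_detach_controller nvme0
      done
    done
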
00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.520 09:54:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.781 nvme0n1 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.781 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.042 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.303 nvme0n1 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.303 09:54:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.303 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:33.304 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.304 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.304 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:33.304 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.304 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.565 nvme0n1 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:33.565 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.566 09:54:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.140 nvme0n1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 
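
The entries above and below record one pass of host/auth.sh's main test loop: for each digest, FFDHE group, and key index, the script installs the DH-HMAC-CHAP secret on the kernel nvmet target (nvmet_auth_set_key), restricts the SPDK host to the matching digest/dhgroup pair, re-attaches the controller with that key, and treats the appearance of a single controller named nvme0 as proof that authentication succeeded before detaching and moving on. A minimal sketch of the loop, reconstructed from this trace, follows; the rpc_cmd invocations are verbatim from the log, while the nvmet configfs path and the exact contents of the keys/ckeys arrays are assumptions that this log does not confirm.

    # Sketch reconstructed from the xtrace output; not the literal host/auth.sh.
    for digest in "${digests[@]}"; do          # sha256 here, sha384 later in the log
      for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do         # key indexes 0..4
          # Target side: provision the secret for the host NQN (assumed to land
          # under /sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_key and friends).
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
          # Host side: allow exactly one digest/dhgroup combination for this pass.
          rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
          # Reconnect with the matching key; the controller key is only passed
          # when a ckey is defined for this index (keyid 4 has an empty ckey).
          rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" \
            ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
          # Authentication passed iff the controller actually came up by name.
          [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
          rpc_cmd bdev_nvme_detach_controller nvme0
        done
      done
    done
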
00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.140 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.401 nvme0n1 00:36:34.401 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.401 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.401 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.401 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.401 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
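
Each reconnect above first runs get_main_ns_ip, and its trace is worth decoding: the helper maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and only dereferences that name at the end, which is why the log prints ip=NVMF_INITIATOR_IP and then echoes 10.0.0.1. Below is a sketch consistent with every traced line, with the caveat that the transport selector's variable name and the error returns are assumptions rather than something this log shows.

    # Reconstruction of nvmf/common.sh's get_main_ns_ip from the trace above;
    # using $TEST_TRANSPORT as the selector is an assumption.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1             # traces as: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }

00:36:34.662 09:54:09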
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:34.662 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:34.663 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:34.663 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.663 09:54:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.923 nvme0n1 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:34.923 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.186 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.449 nvme0n1 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.449 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.711 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.712 09:54:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.973 nvme0n1 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:35.973 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.234 09:54:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:36.806 nvme0n1 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.806 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.807 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.378 nvme0n1 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.378 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:36:37.638 
09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:37.638 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.639 09:54:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.210 nvme0n1 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:38.210 
09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.210 09:54:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.781 nvme0n1 00:36:38.781 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.782 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:38.782 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:38.782 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.782 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:38.782 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.042 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.613 nvme0n1 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:39.614 09:54:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.614 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.880 nvme0n1 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:39.880 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.142 nvme0n1
09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.142 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.402 nvme0n1
09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==:
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2:
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==:
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2:
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.402 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 nvme0n1
09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=:
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=:
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:40.661 09:54:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 nvme0n1
09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.661 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
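The ffdhe2048 pass above runs one identical round-trip per key index. As a hedged bash sketch of that round-trip, paraphrased from the rpc_cmd calls visible in the trace (calling SPDK's scripts/rpc.py directly is an assumption of this sketch, as are the shell variable names; the key names key0..key4/ckey0..ckey4 and every flag are taken verbatim from the log):

    #!/usr/bin/env bash
    # One connect_authenticate round-trip as exercised by host/auth.sh@60-65 above.
    digest=sha384 dhgroup=ffdhe2048 keyid=0
    # Pin the host to a single digest/DH-group combination (auth.sh@60).
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Attach with key<N>; a controller key is passed only when ckey<N> is non-empty (auth.sh@58/@61).
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # DH-HMAC-CHAP succeeded only if the controller actually came up (auth.sh@64),
    # after which the test tears it down again (auth.sh@65).
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0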
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:40.921 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d:
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=:
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d:
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]]
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=:
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:40.922 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.922 nvme0n1
09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==:
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==:
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==:
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==:
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.182 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.182 nvme0n1
09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.443 nvme0n1
09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.443 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==:
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2:
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==:
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2:
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.703 09:54:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.703 nvme0n1
09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.703 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=:
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=:
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.963 nvme0n1
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:41.963 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
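With ffdhe3072 finished, the same five-key cycle restarts for the next DH group. The sweep that generates these blocks is the nested loop at host/auth.sh@101-@104; a minimal sketch, assuming the keys/ckeys arrays were provisioned earlier in the test (array contents elided, and only the DH groups that appear in this excerpt are listed):

    # Sweep reproduced from the auth.sh trace markers @101-@104.
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)        # groups seen in this excerpt
    for dhgroup in "${dhgroups[@]}"; do             # host/auth.sh@101
        for keyid in "${!keys[@]}"; do              # host/auth.sh@102
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # @103: install key on the target side
            connect_authenticate sha384 "$dhgroup" "$keyid"  # @104: authenticate from the host side
        done
    done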
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d:
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=:
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d:
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=:
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.224 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.486 nvme0n1
09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==:
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==:
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==:
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==:
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:36:42.486 09:54:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.748 nvme0n1
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]]
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a:
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:42.748 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.749 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.009 nvme0n1 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.010 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.270 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.270 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.270 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
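#
# [editor annotation, not captured output] nvmet_auth_set_key (host/auth.sh@42-51, traced
# below and at each iteration above) programs the target side of DH-HMAC-CHAP. The echoes
# of 'hmac(sha384)', the dhgroup, and the DHHC-1 secrets are redirected into the kernel
# nvmet configfs host entry; the redirections themselves are invisible in xtrace. A hedged
# sketch of what those writes look like, assuming the standard nvmet attribute names
# ($HOSTNQN, $key, and $ckey are placeholders, not names from this run):
#
#   echo 'hmac(sha384)' > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_hash
#   echo ffdhe4096      > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_dhgroup
#   echo "$key"         > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_key
#   echo "$ckey"        > /sys/kernel/config/nvmet/hosts/$HOSTNQN/dhchap_ctrl_key
#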
nvmet_auth_set_key sha384 ffdhe4096 3 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.271 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.531 nvme0n1 00:36:43.531 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.531 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.531 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.531 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.531 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.532 09:54:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.532 09:54:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.792 nvme0n1 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:43.792 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
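#
# [editor annotation, not captured output] That completes the sha384/ffdhe4096 sweep; the
# ffdhe6144 iterations follow. Each connect_authenticate above boils down to four RPCs. A
# minimal manual equivalent, assuming the target from this run is still listening on
# 10.0.0.1:4420 (rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py):
#
#   scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
#   scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
#       -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
#       --dhchap-key key1 --dhchap-ctrlr-key ckey1
#   scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # "nvme0" once auth succeeds
#   scripts/rpc.py bdev_nvme_detach_controller nvme0
#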
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.793 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.366 nvme0n1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.366 09:54:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.938 nvme0n1 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:44.938 09:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.938 09:54:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.938 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.199 nvme0n1 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.199 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.200 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:45.460 09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.460 
09:54:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.721 nvme0n1 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.721 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.292 nvme0n1 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:46.293 09:54:21 
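#
# [editor annotation, not captured output] The DHHC-1:xx:...: strings echoed throughout are
# DH-HMAC-CHAP secrets in the standard NVMe representation: the second field names the hash
# used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the
# base64 payload carries the secret plus a CRC-32 check value. A hedged example of minting
# one with nvme-cli (flag spelling may vary across versions, so confirm against
# `nvme gen-dhchap-key --help` locally):
#
#   nvme gen-dhchap-key --hmac=2 --nqn=nqn.2024-02.io.spdk:cnode0   # prints a DHHC-1:02:...: key
#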
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.293 09:54:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:46.864 nvme0n1 00:36:46.864 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.864 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:46.864 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:46.864 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.864 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.125 09:54:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.697 nvme0n1 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:47.697 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:47.697 
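#
# [editor annotation, not captured output] get_main_ns_ip (nvmf/common.sh@769-783, traced
# repeatedly above) resolves the address to dial by transport: rdma maps to
# NVMF_FIRST_TARGET_IP, tcp maps to NVMF_INITIATOR_IP, and for this tcp run it echoes
# 10.0.0.1. A paraphrase of the mechanism visible in the trace (variable names come from the
# trace; the two-step indirect expansion is an assumption about the helper's internals):
#
#   declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
#   ipvar=${ip_candidates[$TEST_TRANSPORT]}   # -> "NVMF_INITIATOR_IP" for tcp
#   echo "${!ipvar}"                          # indirect expansion -> 10.0.0.1 in this run
#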
09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.698 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.642 nvme0n1 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.642 09:54:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 nvme0n1 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 09:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:49.214 09:54:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.214 09:54:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.791 nvme0n1 00:36:49.791 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.791 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:49.791 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:49.792 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.792 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:49.792 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:50.052 nvme0n1 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.052 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.312 nvme0n1 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:36:50.312 
09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:50.312 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.313 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.573 nvme0n1 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.573 
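[Editor's sketch] The get_main_ns_ip helper traced at nvmf/common.sh@769-783 just maps the active transport to the right environment variable and dereferences it, which is why every iteration dials 10.0.0.1. A condensed, runnable reconstruction (the emptiness fallbacks at @775-778 are elided, and TEST_TRANSPORT stands in for the transport variable, whose real name is not visible in the trace):

    # Condensed sketch of get_main_ns_ip as reconstructed from the xtrace output.
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
        echo "${!ip}"                          # indirect expansion -> 10.0.0.1 here
    }
    get_main_ns_ip   # prints 10.0.0.1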
09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.573 09:54:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.834 nvme0n1 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.834 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.094 nvme0n1 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.094 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.354 nvme0n1 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.354 
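[Editor's sketch] On the host side, rpc_cmd forwards each call to SPDK's JSON-RPC server, so every connect_authenticate pass reduces to the four RPCs visible in the trace: restrict the allowed digest/DH group, attach with the per-keyid secrets, confirm the controller came up, and detach. The stand-alone equivalent via scripts/rpc.py, with arguments copied from the sha512/ffdhe3072 keyid=0 iteration here (key0/ckey0 are key names the script registered with SPDK earlier, outside this excerpt):

    # Sketch: one authentication round trip through SPDK's RPC client.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0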
09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.354 09:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.354 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.355 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.614 nvme0n1 00:36:51.614 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.614 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.614 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.614 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.614 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:51.615 09:54:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.615 09:54:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.875 nvme0n1 00:36:51.875 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.875 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:51.875 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:51.875 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.876 09:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.876 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 nvme0n1 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:36:52.137 
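[Editor's note] Every secret in this trace uses the NVMe TP 8006 interchange format DHHC-1:xx:base64:, where, if the spec is read correctly, the two-digit field names an optional secret transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and implies the secret length (32/48/64 bytes), and the base64 payload is the raw secret followed by a 4-byte CRC-32. The lengths in this log are consistent with that reading (the 01 keys decode to 36 bytes, the 03 keys to 68). Under that assumption the secret bytes can be peeled out with coreutils alone:

    # Sketch: extract the secret from a DHHC-1 key (GNU head for the negative -c).
    key='DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY:'
    b64=${key#DHHC-1:??:}   # drop the 'DHHC-1:xx:' prefix
    b64=${b64%:}            # drop the trailing ':'
    printf '%s' "$b64" | base64 -d | head -c -4 | od -An -tx1   # secret minus 4 CRC bytes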
09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:52.137 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.138 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
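The block above finishes the sha512/ffdhe3072 pass (key ids 3 and 4). Each iteration follows the same shape: nvmet_auth_set_key pushes the hash name, FFDHE group, and DHHC-1 secrets to the kernel nvmet target (the echo 'hmac(sha512)' / echo ffdhe3072 / echo DHHC-1:... lines), bdev_nvme_set_options pins the SPDK host to the digest/dhgroup pair under test, bdev_nvme_attach_controller connects over TCP with --dhchap-key (adding --dhchap-ctrlr-key only when a controller key exists -- key id 4 has ckey= empty, so its [[ -z '' ]] check skips bidirectional auth), and bdev_nvme_get_controllers piped through jq confirms nvme0 came up before bdev_nvme_detach_controller tears it down. The bare nvme0n1 lines are most likely the namespace surfacing after each successful attach. A minimal sketch of one iteration, assuming rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py and that keys named key3/ckey3 were registered with the target earlier in the run:

    # One pass of the digest/dhgroup/keyid loop traced above (a sketch of the
    # observable RPC sequence, not the verbatim auth.sh logic).
    digest=sha512 dhgroup=ffdhe3072 keyid=3
    ckey=(--dhchap-ctrlr-key "ckey${keyid}")   # left empty for key id 4 (no ctrlr key)

    # Host side: only the combination under test may be negotiated.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect with DH-HMAC-CHAP; the ctrlr key makes authentication bidirectional.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # Verify the authenticated controller exists, then detach for the next key id.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The same cycle repeats below for ffdhe4096, ffdhe6144, and ffdhe8192.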
00:36:52.399 nvme0n1 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:52.399 09:54:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.399 09:54:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.660 nvme0n1 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:52.660 09:54:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:52.660 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.661 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:52.922 09:54:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.922 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.183 nvme0n1 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.183 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.184 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.445 nvme0n1 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.445 09:54:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.707 nvme0n1 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.707 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:53.968 nvme0n1 00:36:53.968 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.968 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:53.968 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:53.968 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.968 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.229 09:54:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.229 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:54.230 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:54.230 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:54.230 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:54.230 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.230 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.490 nvme0n1 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.490 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:54.763 09:54:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.763 09:54:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.078 nvme0n1 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.078 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.672 nvme0n1 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.672 09:54:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.933 nvme0n1 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.933 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.193 09:54:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.193 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.453 nvme0n1 00:36:56.453 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.453 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:56.453 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:56.453 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.453 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmVlYjA1MmJiNDA4Yjk0MTM5OWM0ODkwZjE2YTc0MTjFh59d: 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: ]] 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjE1MjliOTNlNjMzMWUzODVhYWQwZGRhZWU5YjJjNGYzZGM0YTkzNTNhNGVlZmEzMWFhZWE5OWY5NWU0N2RkYaKXeuI=: 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:56.454 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:56.713 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:56.714 09:54:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.283 nvme0n1 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.283 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.284 09:54:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.854 nvme0n1 00:36:57.854 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.854 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:57.855 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:57.855 09:54:33 
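Each connect_authenticate pass above is the same short host-side RPC sequence; written out as plain SPDK rpc.py invocations it looks as below. rpc_cmd in the trace is the test wrapper around scripts/rpc.py, key1/ckey1 name keyring entries registered earlier in the test (outside this excerpt), and get_main_ns_ip simply resolves to NVMF_INITIATOR_IP (10.0.0.1) for tcp:

./scripts/rpc.py bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
./scripts/rpc.py bdev_nvme_detach_controller nvme0
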
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.855 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:57.855 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.115 09:54:33 
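The DHHC-1 strings exercised throughout follow the NVMe-oF in-band authentication secret representation, DHHC-1:<hh>:<base64>:, where <hh> (00-03) names the hash used when the secret is transformed (00 = untransformed) and the base64 payload is the secret followed by a 4-byte CRC-32. That reading of the format is our interpretation, not stated in the trace; it can be sanity-checked against any of the keys above:

key=N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==
echo "$key" | base64 -d | wc -c   # prints secret length + 4 (the CRC-32)
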
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.115 09:54:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.685 nvme0n1 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YWMyZTgyNmY0ZmViODY5MTZkYzJiMGIyMmEzMGQzMTU3Y2I4MjM3ZDA0MTUzZmNkz5udcQ==: 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzQwYzA3NDMwOTIxMGQzODI0YzRhZWQ3OTJmZDZlYWNfk7c2: 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:58.685 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:36:58.686 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.686 
09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.628 nvme0n1 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTAyY2RkZWEzYzhjZmVkZGE5YjNkMzdhYmIzNjk4NTg3MjcwMWRiMDM3MGJiMzI1NzQyYjEwNzY5MjJjYjcwNFwfJzg=: 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.628 09:54:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.196 nvme0n1 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.196 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.197 request: 00:37:00.197 { 00:37:00.197 "name": "nvme0", 00:37:00.197 "trtype": "tcp", 00:37:00.197 "traddr": "10.0.0.1", 00:37:00.197 "adrfam": "ipv4", 00:37:00.197 "trsvcid": "4420", 00:37:00.197 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:00.197 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:00.197 "prchk_reftag": false, 00:37:00.197 "prchk_guard": false, 00:37:00.197 "hdgst": false, 00:37:00.197 "ddgst": false, 00:37:00.197 "allow_unrecognized_csi": false, 00:37:00.197 "method": "bdev_nvme_attach_controller", 00:37:00.197 "req_id": 1 00:37:00.197 } 00:37:00.197 Got JSON-RPC error response 00:37:00.197 response: 00:37:00.197 { 00:37:00.197 "code": -5, 00:37:00.197 "message": "Input/output error" 00:37:00.197 } 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
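The failed attach above is deliberate: the target now requires DH-HMAC-CHAP (re-keyed to sha256/ffdhe2048 at auth.sh@110), so connecting without any --dhchap-key must fail, and the NOT wrapper inverts rpc_cmd's exit status so the test only passes when the RPC errors out. A minimal stand-in for that helper (the real one lives in common/autotest_common.sh and handles more cases):

NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    fi
    return 0       # command failed, which is what the caller wanted
}
NOT ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0   # no key against an auth-required target
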
00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:37:00.197 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.456 request: 00:37:00.456 { 00:37:00.456 "name": "nvme0", 00:37:00.456 "trtype": "tcp", 00:37:00.456 "traddr": "10.0.0.1", 00:37:00.456 "adrfam": "ipv4", 00:37:00.456 "trsvcid": "4420", 00:37:00.456 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:00.456 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:00.456 "prchk_reftag": false, 00:37:00.456 "prchk_guard": false, 00:37:00.456 "hdgst": false, 00:37:00.456 "ddgst": false, 00:37:00.456 "dhchap_key": "key2", 00:37:00.456 "allow_unrecognized_csi": false, 00:37:00.456 "method": "bdev_nvme_attach_controller", 00:37:00.456 "req_id": 1 00:37:00.456 } 00:37:00.456 Got JSON-RPC error response 00:37:00.456 response: 00:37:00.456 { 00:37:00.456 "code": -5, 00:37:00.456 "message": "Input/output error" 00:37:00.456 } 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
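The key2 refusal works the same way: the target host entry holds the keyid-1 secret, so presenting key2 (and, just below, key1 with the wrong controller key ckey2) is rejected at connect time with the same -5 Input/output error. After each refused attach the test asserts that nothing leaked into the bdev layer, a check that can be run by hand:

./scripts/rpc.py bdev_nvme_get_controllers | jq length   # expect 0
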
00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:00.456 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.457 request: 00:37:00.457 { 00:37:00.457 "name": "nvme0", 00:37:00.457 "trtype": "tcp", 00:37:00.457 "traddr": "10.0.0.1", 00:37:00.457 "adrfam": "ipv4", 00:37:00.457 "trsvcid": "4420", 00:37:00.457 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:37:00.457 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:37:00.457 "prchk_reftag": false, 00:37:00.457 "prchk_guard": false, 00:37:00.457 "hdgst": false, 00:37:00.457 "ddgst": false, 00:37:00.457 "dhchap_key": "key1", 00:37:00.457 "dhchap_ctrlr_key": "ckey2", 00:37:00.457 "allow_unrecognized_csi": false, 00:37:00.457 "method": "bdev_nvme_attach_controller", 00:37:00.457 "req_id": 1 00:37:00.457 } 00:37:00.457 Got JSON-RPC error response 00:37:00.457 response: 00:37:00.457 { 00:37:00.457 "code": -5, 00:37:00.457 "message": "Input/output 
error" 00:37:00.457 } 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.457 09:54:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.716 nvme0n1 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.716 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:00.717 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:00.717 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:37:00.717 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.717 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.977 request: 00:37:00.977 { 00:37:00.977 "name": "nvme0", 00:37:00.977 "dhchap_key": "key1", 00:37:00.977 "dhchap_ctrlr_key": "ckey2", 00:37:00.977 "method": "bdev_nvme_set_keys", 00:37:00.977 "req_id": 1 00:37:00.977 } 00:37:00.977 Got JSON-RPC error response 00:37:00.977 response: 00:37:00.977 { 00:37:00.977 "code": -13, 00:37:00.977 "message": "Permission denied" 00:37:00.977 } 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:00.977 09:54:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:37:01.916 09:54:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:37:02.856 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:37:02.856 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:37:02.856 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.856 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:02.856 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2NmZDZiNWI2Njc3NDc0NTM0YzMwMmZmYThkOGIwNmRlYmI3MzhiNDZmYjE4NmVkp8rIAg==: 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: ]] 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OTAzZjBmNDhhYWIxZGExMGU0YzhlYWQ0MGE0YWViNTc4NmI1ZDEyM2ZkNWU0MTNmdVUh2g==: 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:37:03.117 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.118 nvme0n1 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTJiOWI0Y2MxODE5MmE4MDAzZTg4Y2FiZDA1ODRlZDk2K/RY: 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: ]] 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0Mjc0NDA4ZDhlZmNmOGQ1MDAyZjA4OTZmZGJjZjN5rQ7a: 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.118 request: 00:37:03.118 { 00:37:03.118 "name": "nvme0", 00:37:03.118 "dhchap_key": "key2", 00:37:03.118 "dhchap_ctrlr_key": "ckey1", 00:37:03.118 "method": "bdev_nvme_set_keys", 00:37:03.118 "req_id": 1 00:37:03.118 } 00:37:03.118 Got JSON-RPC error response 00:37:03.118 response: 00:37:03.118 { 00:37:03.118 "code": -13, 00:37:03.118 "message": "Permission denied" 00:37:03.118 } 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.118 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:03.379 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.379 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:37:03.379 09:54:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:37:04.319 09:54:39 
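bdev_nvme_set_keys re-keys a live controller and triggers re-authentication, so the two mismatched calls above (key1/ckey2 at auth.sh@136, then key2/ckey1 at auth.sh@147) come back with -13 Permission denied rather than the connect-time -5, while the matching key2/ckey2 pair at auth.sh@133 went through. The (( 1 != 0 )) / sleep 1s loops poll bdev_nvme_get_controllers until the controller, attached with --ctrlr-loss-timeout-sec 1, drops out after each refused re-key. The accepted and refused variants as bare rpc.py calls:

./scripts/rpc.py bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2   # matches target: accepted
./scripts/rpc.py bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey1   # mismatch: -13 refused
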
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:04.319 rmmod nvme_tcp 00:37:04.319 rmmod nvme_fabrics 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3030978 ']' 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3030978 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3030978 ']' 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3030978 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:04.319 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3030978 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3030978' 00:37:04.580 killing process with pid 3030978 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3030978 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3030978 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:37:04.580 09:54:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:06.508 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:06.508 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:06.770 09:54:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:06.770 09:54:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:10.070 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:10.330 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:10.901 09:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Pd5 /tmp/spdk.key-null.K09 /tmp/spdk.key-sha256.UJY /tmp/spdk.key-sha384.TpT /tmp/spdk.key-sha512.kor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:37:10.901 09:54:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:14.204 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:37:14.204 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:14.204 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:14.204 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:14.205 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:14.780 00:37:14.780 real 0m59.638s 00:37:14.780 user 0m53.527s 00:37:14.780 sys 0m15.671s 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.780 ************************************ 00:37:14.780 END TEST nvmf_auth_host 00:37:14.780 ************************************ 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:14.780 09:54:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:37:14.780 ************************************ 00:37:14.780 START TEST nvmf_digest 00:37:14.780 ************************************ 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:37:14.780 * Looking for test storage... 
00:37:14.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:37:14.780 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:14.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.781 --rc genhtml_branch_coverage=1 00:37:14.781 --rc genhtml_function_coverage=1 00:37:14.781 --rc genhtml_legend=1 00:37:14.781 --rc geninfo_all_blocks=1 00:37:14.781 --rc geninfo_unexecuted_blocks=1 00:37:14.781 00:37:14.781 ' 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:14.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.781 --rc genhtml_branch_coverage=1 00:37:14.781 --rc genhtml_function_coverage=1 00:37:14.781 --rc genhtml_legend=1 00:37:14.781 --rc geninfo_all_blocks=1 00:37:14.781 --rc geninfo_unexecuted_blocks=1 00:37:14.781 00:37:14.781 ' 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:14.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.781 --rc genhtml_branch_coverage=1 00:37:14.781 --rc genhtml_function_coverage=1 00:37:14.781 --rc genhtml_legend=1 00:37:14.781 --rc geninfo_all_blocks=1 00:37:14.781 --rc geninfo_unexecuted_blocks=1 00:37:14.781 00:37:14.781 ' 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:14.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:14.781 --rc genhtml_branch_coverage=1 00:37:14.781 --rc genhtml_function_coverage=1 00:37:14.781 --rc genhtml_legend=1 00:37:14.781 --rc geninfo_all_blocks=1 00:37:14.781 --rc geninfo_unexecuted_blocks=1 00:37:14.781 00:37:14.781 ' 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:14.781 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:15.043 
09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:15.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:15.043 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:15.044 09:54:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:37:15.044 09:54:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:23.186 
09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:23.186 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:23.186 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:23.186 Found net devices under 0000:4b:00.0: cvl_0_0 
00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:23.186 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:23.186 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:23.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:23.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:37:23.187 00:37:23.187 --- 10.0.0.2 ping statistics --- 00:37:23.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.187 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:23.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:23.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:37:23.187 00:37:23.187 --- 10.0.0.1 ping statistics --- 00:37:23.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:23.187 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 ************************************ 00:37:23.187 START TEST nvmf_digest_clean 00:37:23.187 ************************************ 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3048195 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3048195 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3048195 ']' 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:23.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.187 09:54:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 [2024-12-09 09:54:57.543206] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:23.187 [2024-12-09 09:54:57.543269] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.187 [2024-12-09 09:54:57.642428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.187 [2024-12-09 09:54:57.668837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.187 [2024-12-09 09:54:57.668885] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.187 [2024-12-09 09:54:57.668894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.187 [2024-12-09 09:54:57.668901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.187 [2024-12-09 09:54:57.668907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
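The nvmf_tgt launched above with --wait-for-rpc idles inside the cvl_0_0_ns_spdk namespace until the harness drives it over /var/tmp/spdk.sock; the null0 bdev and the 10.0.0.2:4420 listener reported below are the result of that configuration. A minimal sketch of the likely rpc_cmd sequence, assuming the defaults exported earlier in this log (subsystem NQN nqn.2016-06.io.spdk:cnode1, serial SPDKISFASTANDAWESOME, NVMF_TRANSPORT_OPTS='-t tcp -o') and an assumed null-bdev geometry:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py bdev_null_create null0 1000 512        # bdev name from the log; size_mb/block_size assumed
  ./scripts/rpc.py nvmf_create_transport -t tcp -o        # matches NVMF_TRANSPORT_OPTS set above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420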
00:37:23.187 [2024-12-09 09:54:57.669677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 null0 00:37:23.187 [2024-12-09 09:54:58.485359] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.187 [2024-12-09 09:54:58.509687] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3048269 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3048269 /var/tmp/bperf.sock 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3048269 ']' 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:23.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.187 09:54:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:23.187 [2024-12-09 09:54:58.568282] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:23.187 [2024-12-09 09:54:58.568347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3048269 ] 00:37:23.449 [2024-12-09 09:54:58.658687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.449 [2024-12-09 09:54:58.686319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:24.023 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:24.023 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:24.023 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:24.023 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:24.023 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:24.285 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:24.285 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:24.546 nvme0n1 00:37:24.546 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:24.546 09:54:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:24.808 Running I/O for 2 seconds... 
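The controller attach above enables only the NVMe/TCP data digest (--ddgst), so every data PDU carries a CRC32C that the initiator computes and verifies in software; those are the crc32c operations the accel_get_stats query after the run is expected to count. For contrast, a hedged variant that would also enable the header digest (--hdgst is the companion flag of this RPC, not exercised in this run):

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --hdgst --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0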
00:37:26.696 18584.00 IOPS, 72.59 MiB/s [2024-12-09T08:55:02.149Z] 20192.50 IOPS, 78.88 MiB/s 00:37:26.696 Latency(us) 00:37:26.696 [2024-12-09T08:55:02.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.696 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:26.696 nvme0n1 : 2.00 20222.53 78.99 0.00 0.00 6321.24 2839.89 17039.36 00:37:26.696 [2024-12-09T08:55:02.149Z] =================================================================================================================== 00:37:26.696 [2024-12-09T08:55:02.149Z] Total : 20222.53 78.99 0.00 0.00 6321.24 2839.89 17039.36 00:37:26.696 { 00:37:26.696 "results": [ 00:37:26.696 { 00:37:26.696 "job": "nvme0n1", 00:37:26.696 "core_mask": "0x2", 00:37:26.696 "workload": "randread", 00:37:26.696 "status": "finished", 00:37:26.696 "queue_depth": 128, 00:37:26.696 "io_size": 4096, 00:37:26.696 "runtime": 2.004546, 00:37:26.696 "iops": 20222.534179809292, 00:37:26.696 "mibps": 78.99427413988005, 00:37:26.696 "io_failed": 0, 00:37:26.696 "io_timeout": 0, 00:37:26.696 "avg_latency_us": 6321.243587504419, 00:37:26.696 "min_latency_us": 2839.8933333333334, 00:37:26.696 "max_latency_us": 17039.36 00:37:26.696 } 00:37:26.696 ], 00:37:26.696 "core_count": 1 00:37:26.696 } 00:37:26.696 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:26.696 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:26.696 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:26.696 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:26.696 | select(.opcode=="crc32c") 00:37:26.696 | "\(.module_name) \(.executed)"' 00:37:26.696 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3048269 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3048269 ']' 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3048269 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3048269 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 
= sudo ']' 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3048269' 00:37:26.958 killing process with pid 3048269 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3048269 00:37:26.958 Received shutdown signal, test time was about 2.000000 seconds 00:37:26.958 00:37:26.958 Latency(us) 00:37:26.958 [2024-12-09T08:55:02.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:26.958 [2024-12-09T08:55:02.411Z] =================================================================================================================== 00:37:26.958 [2024-12-09T08:55:02.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:26.958 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3048269 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3049072 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3049072 /var/tmp/bperf.sock 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3049072 ']' 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.219 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:27.219 [2024-12-09 09:55:02.515836] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:37:27.220 [2024-12-09 09:55:02.515893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049072 ] 00:37:27.220 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:27.220 Zero copy mechanism will not be used. 00:37:27.220 [2024-12-09 09:55:02.600913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:27.220 [2024-12-09 09:55:02.616959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:27.220 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:27.220 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:27.220 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:27.220 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:27.220 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:27.481 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:27.481 09:55:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:28.053 nvme0n1 00:37:28.053 09:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:28.053 09:55:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:28.053 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:28.053 Zero copy mechanism will not be used. 00:37:28.053 Running I/O for 2 seconds... 
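A consistency check that applies to every bdevperf summary row in this section: MiB/s should equal IOPS times IO size divided by 2^20. For the 4096-byte randread run above:

  echo "scale=2; 20222.53 * 4096 / 1048576" | bc    # prints 78.99, matching the 78.99 MiB/s summary row

The 131072-byte rows that follow obey the same identity (roughly 4000 IOPS at 128 KiB is about 500 MiB/s), a cheap way to spot a mislabeled IO size or unit when scanning results.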
00:37:29.933 3926.00 IOPS, 490.75 MiB/s [2024-12-09T08:55:05.386Z] 4017.00 IOPS, 502.12 MiB/s 00:37:29.933 Latency(us) 00:37:29.933 [2024-12-09T08:55:05.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:29.933 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:37:29.933 nvme0n1 : 2.01 4013.33 501.67 0.00 0.00 3983.83 648.53 12561.07 00:37:29.933 [2024-12-09T08:55:05.386Z] =================================================================================================================== 00:37:29.933 [2024-12-09T08:55:05.386Z] Total : 4013.33 501.67 0.00 0.00 3983.83 648.53 12561.07 00:37:29.933 { 00:37:29.933 "results": [ 00:37:29.933 { 00:37:29.933 "job": "nvme0n1", 00:37:29.933 "core_mask": "0x2", 00:37:29.933 "workload": "randread", 00:37:29.933 "status": "finished", 00:37:29.933 "queue_depth": 16, 00:37:29.933 "io_size": 131072, 00:37:29.933 "runtime": 2.005818, 00:37:29.933 "iops": 4013.325236885899, 00:37:29.933 "mibps": 501.66565461073736, 00:37:29.933 "io_failed": 0, 00:37:29.933 "io_timeout": 0, 00:37:29.933 "avg_latency_us": 3983.8306583850926, 00:37:29.933 "min_latency_us": 648.5333333333333, 00:37:29.933 "max_latency_us": 12561.066666666668 00:37:29.933 } 00:37:29.933 ], 00:37:29.933 "core_count": 1 00:37:29.933 } 00:37:29.933 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:29.933 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:29.933 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:29.933 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:29.933 | select(.opcode=="crc32c") 00:37:29.933 | "\(.module_name) \(.executed)"' 00:37:29.933 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3049072 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3049072 ']' 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3049072 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049072 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049072' 00:37:30.193 killing process with pid 3049072 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3049072 00:37:30.193 Received shutdown signal, test time was about 2.000000 seconds 00:37:30.193 00:37:30.193 Latency(us) 00:37:30.193 [2024-12-09T08:55:05.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:30.193 [2024-12-09T08:55:05.646Z] =================================================================================================================== 00:37:30.193 [2024-12-09T08:55:05.646Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:30.193 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3049072 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3049599 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3049599 /var/tmp/bperf.sock 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3049599 ']' 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:30.454 [2024-12-09 09:55:05.744752] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:37:30.454 [2024-12-09 09:55:05.744812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3049599 ] 00:37:30.454 [2024-12-09 09:55:05.826356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:30.454 [2024-12-09 09:55:05.842353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:30.454 09:55:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:30.716 09:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:30.717 09:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:31.290 nvme0n1 00:37:31.290 09:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:31.290 09:55:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:31.290 Running I/O for 2 seconds... 
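For reference, the randwrite run above is driven entirely over bdevperf's RPC socket. A minimal sketch of the sequence this trace executes, using only commands visible in the log (target address 10.0.0.2:4420 and workspace paths as shown; $SPDK is shorthand introduced here for readability, and the backgrounding is an assumption about how the harness launches the process):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start bdevperf idle on its own RPC socket, waiting for configuration
    $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # finish framework init, then attach the NVMe-oF TCP controller with data digest (--ddgst) enabled
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # run the timed workload, then check which accel module executed the crc32c digests
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

The final jq check is what the trace's host/digest.sh@93-96 lines evaluate: the test passes only if the executed count is nonzero and the reporting module matches the expected one (software here).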
00:37:33.169 30534.00 IOPS, 119.27 MiB/s [2024-12-09T08:55:08.622Z] 30659.50 IOPS, 119.76 MiB/s 00:37:33.169 Latency(us) 00:37:33.169 [2024-12-09T08:55:08.622Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.169 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:33.169 nvme0n1 : 2.01 30677.67 119.83 0.00 0.00 4167.03 2252.80 11632.64 00:37:33.169 [2024-12-09T08:55:08.622Z] =================================================================================================================== 00:37:33.169 [2024-12-09T08:55:08.622Z] Total : 30677.67 119.83 0.00 0.00 4167.03 2252.80 11632.64 00:37:33.169 { 00:37:33.169 "results": [ 00:37:33.169 { 00:37:33.169 "job": "nvme0n1", 00:37:33.169 "core_mask": "0x2", 00:37:33.169 "workload": "randwrite", 00:37:33.169 "status": "finished", 00:37:33.169 "queue_depth": 128, 00:37:33.169 "io_size": 4096, 00:37:33.169 "runtime": 2.005009, 00:37:33.169 "iops": 30677.66778104238, 00:37:33.169 "mibps": 119.83463976969679, 00:37:33.169 "io_failed": 0, 00:37:33.169 "io_timeout": 0, 00:37:33.169 "avg_latency_us": 4167.03078465482, 00:37:33.169 "min_latency_us": 2252.8, 00:37:33.169 "max_latency_us": 11632.64 00:37:33.169 } 00:37:33.169 ], 00:37:33.169 "core_count": 1 00:37:33.169 } 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:33.430 | select(.opcode=="crc32c") 00:37:33.430 | "\(.module_name) \(.executed)"' 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3049599 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3049599 ']' 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3049599 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3049599 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3049599' 00:37:33.430 killing process with pid 3049599 00:37:33.430 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3049599 00:37:33.430 Received shutdown signal, test time was about 2.000000 seconds 00:37:33.430 00:37:33.430 Latency(us) 00:37:33.430 [2024-12-09T08:55:08.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:33.430 [2024-12-09T08:55:08.883Z] =================================================================================================================== 00:37:33.430 [2024-12-09T08:55:08.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:33.431 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3049599 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3050258 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3050258 /var/tmp/bperf.sock 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3050258 ']' 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:33.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.692 09:55:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:33.692 [2024-12-09 09:55:09.031720] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:37:33.692 [2024-12-09 09:55:09.031788] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3050258 ] 00:37:33.692 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:33.692 Zero copy mechanism will not be used. 00:37:33.692 [2024-12-09 09:55:09.115232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.692 [2024-12-09 09:55:09.129796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:34.632 09:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.632 09:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:37:34.632 09:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:37:34.632 09:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:37:34.632 09:55:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:34.632 09:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:34.632 09:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:34.892 nvme0n1 00:37:34.892 09:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:37:34.892 09:55:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:34.892 I/O size of 131072 is greater than zero copy threshold (65536). 00:37:34.892 Zero copy mechanism will not be used. 00:37:34.892 Running I/O for 2 seconds... 
00:37:37.216 5810.00 IOPS, 726.25 MiB/s [2024-12-09T08:55:12.669Z] 4747.50 IOPS, 593.44 MiB/s 00:37:37.216 Latency(us) 00:37:37.216 [2024-12-09T08:55:12.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.216 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:37.216 nvme0n1 : 2.01 4742.47 592.81 0.00 0.00 3367.14 1119.57 11523.41 00:37:37.216 [2024-12-09T08:55:12.669Z] =================================================================================================================== 00:37:37.216 [2024-12-09T08:55:12.669Z] Total : 4742.47 592.81 0.00 0.00 3367.14 1119.57 11523.41 00:37:37.216 { 00:37:37.216 "results": [ 00:37:37.216 { 00:37:37.216 "job": "nvme0n1", 00:37:37.216 "core_mask": "0x2", 00:37:37.216 "workload": "randwrite", 00:37:37.216 "status": "finished", 00:37:37.216 "queue_depth": 16, 00:37:37.216 "io_size": 131072, 00:37:37.216 "runtime": 2.005493, 00:37:37.216 "iops": 4742.474792981077, 00:37:37.216 "mibps": 592.8093491226347, 00:37:37.216 "io_failed": 0, 00:37:37.216 "io_timeout": 0, 00:37:37.216 "avg_latency_us": 3367.1361581326883, 00:37:37.216 "min_latency_us": 1119.5733333333333, 00:37:37.216 "max_latency_us": 11523.413333333334 00:37:37.216 } 00:37:37.216 ], 00:37:37.216 "core_count": 1 00:37:37.216 } 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:37:37.216 | select(.opcode=="crc32c") 00:37:37.216 | "\(.module_name) \(.executed)"' 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3050258 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3050258 ']' 00:37:37.216 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3050258 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3050258 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3050258' 00:37:37.217 killing process with pid 3050258 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3050258 00:37:37.217 Received shutdown signal, test time was about 2.000000 seconds 00:37:37.217 00:37:37.217 Latency(us) 00:37:37.217 [2024-12-09T08:55:12.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:37.217 [2024-12-09T08:55:12.670Z] =================================================================================================================== 00:37:37.217 [2024-12-09T08:55:12.670Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:37.217 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3050258 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3048195 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3048195 ']' 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3048195 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3048195 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3048195' 00:37:37.478 killing process with pid 3048195 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3048195 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3048195 00:37:37.478 00:37:37.478 real 0m15.390s 00:37:37.478 user 0m30.085s 00:37:37.478 sys 0m3.700s 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:37:37.478 ************************************ 00:37:37.478 END TEST nvmf_digest_clean 00:37:37.478 ************************************ 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:37.478 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:37:37.740 ************************************ 00:37:37.740 START TEST nvmf_digest_error 00:37:37.740 ************************************ 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # 
run_digest_error 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3050995 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3050995 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3050995 ']' 00:37:37.740 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.741 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.741 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:37.741 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:37.741 09:55:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:37.741 [2024-12-09 09:55:13.008098] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:37.741 [2024-12-09 09:55:13.008153] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:37.741 [2024-12-09 09:55:13.098899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:37.741 [2024-12-09 09:55:13.114729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:37.741 [2024-12-09 09:55:13.114760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:37.741 [2024-12-09 09:55:13.114766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:37.741 [2024-12-09 09:55:13.114771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:37.741 [2024-12-09 09:55:13.114775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:37.741 [2024-12-09 09:55:13.115258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.685 [2024-12-09 09:55:13.837240] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.685 null0 00:37:38.685 [2024-12-09 09:55:13.910303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.685 [2024-12-09 09:55:13.934517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3051318 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3051318 /var/tmp/bperf.sock 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3051318 ']' 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
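The target-side NOTICEs above (crc32c assigned to module error, null0 created, *** TCP Transport Init ***, listener on 10.0.0.2 port 4420) come from the common target config applied after nvmf_tgt starts with --wait-for-rpc. The config script itself is not shown in this log; the following is a hedged reconstruction using standard SPDK RPCs that would produce the same state, where the null bdev size and the exact ordering beyond what the NOTICEs imply are assumptions:

    # hypothetical reconstruction -- only the resulting NOTICEs appear in the log above
    rpc_cmd accel_assign_opc -o crc32c -m error        # route crc32c work to the error-injecting module
    rpc_cmd framework_start_init
    rpc_cmd bdev_null_create null0 100 4096            # the "null0" namespace bdev (size assumed)
    rpc_cmd nvmf_create_transport -t tcp               # "*** TCP Transport Init ***"
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Assigning the opcode before framework_start_init matters: accel module selection is fixed at framework initialization, which is why the target is started with --wait-for-rpc in the first place.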
00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:38.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:38.685 09:55:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.685 [2024-12-09 09:55:13.991478] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:37:38.686 [2024-12-09 09:55:13.991526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051318 ] 00:37:38.686 [2024-12-09 09:55:14.073870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:38.686 [2024-12-09 09:55:14.090136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.946 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:38.947 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:37:39.519 nvme0n1 00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
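The flood of data digest errors that follows is the intended behavior of this test, not a failure of the harness: injection is disabled (accel_error_inject_error -o crc32c -t disable) while the controller connects, then armed once nvme0n1 is up. A minimal sketch of the two knobs involved, as visible in this trace (rpc_cmd is the harness's rpc.py wrapper for the nvmf target under test; the effects described in the comments are what the log goes on to show):

    # host side: count NVMe errors and retry failed I/O indefinitely
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # target side: corrupt the next 256 crc32c results, so the data digest the
    # target sends no longer matches the payload it carries
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    # each affected read then logs "data digest error on tqpair" on the host and
    # completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which bdevperf retries

With --bdev-retry-count -1 set, every corrupted completion is retried rather than surfaced to the workload, which is why the run below still finishes cleanly despite hundreds of transient transport errors.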
00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:37:39.519 09:55:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:39.519 Running I/O for 2 seconds... 00:37:39.519 [2024-12-09 09:55:14.866705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.866736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.866745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.877819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.877841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.877848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.886690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.886710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.886717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.895016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.895036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.895042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.904393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.904411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.904425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.914064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.914082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.914089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.922011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.922029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.922036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.933443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.933462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.933468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.943959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.943977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.943984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.953503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.953521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.953528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.519 [2024-12-09 09:55:14.961662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.519 [2024-12-09 09:55:14.961679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.519 [2024-12-09 09:55:14.961686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.782 [2024-12-09 09:55:14.971151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:14.971169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:14.971176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:14.984042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:14.984062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:14.984070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:14.992647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:14.992669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:14.992676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.001600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.001619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.001625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.010779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.010797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.010803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.018073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.018093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.018099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.028902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.028920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.028927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.038222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.038240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.038247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.048775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.048794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:9655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.048801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.057883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.057901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.783 [2024-12-09 09:55:15.057908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.783 [2024-12-09 09:55:15.066925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.783 [2024-12-09 09:55:15.066943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.066953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.075221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.075239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.075246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.084985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.085003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.085009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.094329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.094347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.094353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.102457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.102475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.102481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.112996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.113015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9410 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.113021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.122452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.122470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.122480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.130287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.130306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.130313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.139829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.139847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.139854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.149140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.149158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.149168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.784 [2024-12-09 09:55:15.157393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.784 [2024-12-09 09:55:15.157412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.784 [2024-12-09 09:55:15.157418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.166196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.166214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.166221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.175340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.175359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:78 nsid:1 lba:4129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.175366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.185208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.185226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.185233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.194246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.194265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.194272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.202989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.203007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.203014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.211787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.211805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.211812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.220160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.220179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.220185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:39.785 [2024-12-09 09:55:15.229464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:39.785 [2024-12-09 09:55:15.229482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:39.785 [2024-12-09 09:55:15.229489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.049 [2024-12-09 09:55:15.238947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.049 [2024-12-09 09:55:15.238966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.049 [2024-12-09 09:55:15.238972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.049 [2024-12-09 09:55:15.248143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.049 [2024-12-09 09:55:15.248162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.049 [2024-12-09 09:55:15.248170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.049 [2024-12-09 09:55:15.256849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.049 [2024-12-09 09:55:15.256867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.049 [2024-12-09 09:55:15.256874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.049 [2024-12-09 09:55:15.266416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.049 [2024-12-09 09:55:15.266434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.049 [2024-12-09 09:55:15.266441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.274709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.274727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.274734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.284356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.284374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.284381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.293425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.293443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.293450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.301586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 
00:37:40.050 [2024-12-09 09:55:15.301604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.301614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.312704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.312722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.312729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.325049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.325067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.325074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.332929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.332947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.332954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.341661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.341679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.341686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.350983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.351001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.351008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.360051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.360069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.360076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.367918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.367937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.367943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.377916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.377934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.377940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.388484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.388506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.388514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.400108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.400126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.400132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.408268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.408287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.408293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.418286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.418305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.418311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.427095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.427114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.427121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.437576] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.437595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.437602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.447080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.447099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.447105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.457488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.457506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:24903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.457513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.466542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.466560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.466567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.475655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.475673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.475680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.483847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.483864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.483871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.050 [2024-12-09 09:55:15.495300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.050 [2024-12-09 09:55:15.495318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.050 [2024-12-09 09:55:15.495325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.508020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.508039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.508046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.519996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.520014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.520021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.527516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.527534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.527541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.538076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.538095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.538102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.547308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.547326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.547333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.556547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.556565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.556574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.564798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.564815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.564822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.573899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.573916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.573923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.583554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.583572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.583579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.591729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.591746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.311 [2024-12-09 09:55:15.591753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.311 [2024-12-09 09:55:15.600951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.311 [2024-12-09 09:55:15.600969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.600976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.609555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.609579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.618391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.618408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.618415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.626963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.626981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.626988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.635707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.635729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.635735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.644634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.644664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.644672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.654156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.654174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.663321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.663338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.663344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.671800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.671817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.671824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.680724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.680742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.680748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.688550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.688567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:40.312 [2024-12-09 09:55:15.688573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.699990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.700008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.700015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.708020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.708038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.708047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.718942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.718960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.718966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.728205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.728223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.728231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.737437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.737455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.737462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.746290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.746308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.746315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.312 [2024-12-09 09:55:15.755366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.312 [2024-12-09 09:55:15.755383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 
lba:4134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.312 [2024-12-09 09:55:15.755390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.762939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.762958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.762965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.773388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.773406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.773413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.783815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.783833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.783840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.791566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.791587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.791594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.801823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.801840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.801847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.812148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.812166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.812173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.821463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.821480] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.821487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.830367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.830384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.830391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.840542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.840560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.840567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.849739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.849756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.849763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 27049.00 IOPS, 105.66 MiB/s [2024-12-09T08:55:16.027Z] [2024-12-09 09:55:15.858977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.858995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.859001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.869716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.869734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.869741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.877724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.877741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.877748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.886742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.886760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.886766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.896568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.896586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.896592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.906004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.906022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.906030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.914087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.914107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.914114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.923196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.923214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.923220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.931936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.931954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.574 [2024-12-09 09:55:15.931961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.574 [2024-12-09 09:55:15.940968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.574 [2024-12-09 09:55:15.940987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.940994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.948848] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.948866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.948876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.959039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.959059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.959065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.968152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.968169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.968176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.976839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.976857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.976863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.985890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.985908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.985915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:15.993894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:15.993911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:15.993917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:16.003729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:16.003746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:16.003753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:40.575 [2024-12-09 09:55:16.012946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:16.012965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:8919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:16.012972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.575 [2024-12-09 09:55:16.021775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.575 [2024-12-09 09:55:16.021792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.575 [2024-12-09 09:55:16.021799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.836 [2024-12-09 09:55:16.030390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.836 [2024-12-09 09:55:16.030407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.836 [2024-12-09 09:55:16.030414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.836 [2024-12-09 09:55:16.039815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.836 [2024-12-09 09:55:16.039832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.836 [2024-12-09 09:55:16.039839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.836 [2024-12-09 09:55:16.047794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.836 [2024-12-09 09:55:16.047811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.047818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.057330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.057347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.057354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.066742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.066760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.066767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.075388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.075406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.075412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.083680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.083704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.093390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.093407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.093414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.104489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.104507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.104520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.112925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.112942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.112949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.124877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.124895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.124902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.136281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.136300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.136306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.148473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.148490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.148497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.157698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.157716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.157723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.166770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.166788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.166795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.174670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.174688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.174694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.184172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.184189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.184196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.191910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.191931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.191938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.201703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.201721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:40.837 [2024-12-09 09:55:16.201728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.210651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.210668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.210675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.219095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.219112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.219119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.228257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.228275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.228282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.236734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.236751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.236758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.245464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.245482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.245489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.254788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.254806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.254812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.263398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.263415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:24910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.263422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.272597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.272614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:25383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.272621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:40.837 [2024-12-09 09:55:16.283481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:40.837 [2024-12-09 09:55:16.283500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:40.837 [2024-12-09 09:55:16.283506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.291123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.291142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.291148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.300675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.300693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.300700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.310010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.310028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.310035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.318196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.318213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.318219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.327116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.327133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.327140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.335383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.335400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.335407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.345234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.345251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.345262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.354512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.354529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.354536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.361861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.361879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.361886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.372808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.372826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.372833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.381555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.381573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.381580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.391117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 
00:37:41.100 [2024-12-09 09:55:16.391134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.391142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.399095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.399113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.399120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.408342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.408360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:6700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.408367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.417346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.417370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.426359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.426380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.426387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.435554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.435571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.435578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.444311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.444329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.444336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.453386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.453402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.100 [2024-12-09 09:55:16.453409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.100 [2024-12-09 09:55:16.462651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.100 [2024-12-09 09:55:16.462669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.462675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.471499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.471517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.471524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.480243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.480262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.480272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.489096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.489114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.489122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.497404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.497422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.497429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.506660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.506678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.506685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.514706] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.514723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.514729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.523821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.523839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.523846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.534041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.534059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.534066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.101 [2024-12-09 09:55:16.541689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.101 [2024-12-09 09:55:16.541707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.101 [2024-12-09 09:55:16.541713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.551746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.551765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.551772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.560171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.560190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.560197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.570300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.570318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.570326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:37:41.363 [2024-12-09 09:55:16.579777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.579799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.579806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.588482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.588500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.588507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.598624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.363 [2024-12-09 09:55:16.598647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.363 [2024-12-09 09:55:16.598655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.363 [2024-12-09 09:55:16.608176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.608194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.608201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.618726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.618744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.618750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.627488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.627506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.627513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.636680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.636699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.636705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.646499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.646517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.646524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.653831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.653850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.653856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.663592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.663610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.663617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.673711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.673729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.673736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.681144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.681162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.681169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.691049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.691067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.691073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.699606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.699625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.699631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.707839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.707858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.707865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.719715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.719733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.729367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.729386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.729392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.738691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.738710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.738720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.746505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.746523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.746530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.757050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.757069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.757076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.766775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.766793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:41.364 [2024-12-09 09:55:16.766799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.774911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.774928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.774935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.784977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.784995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.785002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.793288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.793307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.793314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.801390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.801408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.801414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.364 [2024-12-09 09:55:16.811020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.364 [2024-12-09 09:55:16.811040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:19883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.364 [2024-12-09 09:55:16.811047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.626 [2024-12-09 09:55:16.819843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.626 [2024-12-09 09:55:16.819866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:41.626 [2024-12-09 09:55:16.819873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:37:41.626 [2024-12-09 09:55:16.828994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910) 00:37:41.626 [2024-12-09 09:55:16.829011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.626 [2024-12-09 09:55:16.829017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.626 [2024-12-09 09:55:16.837441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910)
00:37:41.626 [2024-12-09 09:55:16.837459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.626 [2024-12-09 09:55:16.837465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.626 [2024-12-09 09:55:16.846476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910)
00:37:41.626 [2024-12-09 09:55:16.846494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:24037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.626 [2024-12-09 09:55:16.846501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.626 27526.00 IOPS, 107.52 MiB/s [2024-12-09T08:55:17.079Z] [2024-12-09 09:55:16.855434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e3e910)
00:37:41.627 [2024-12-09 09:55:16.855452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:41.627 [2024-12-09 09:55:16.855459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:37:41.627
00:37:41.627                                                                  Latency(us)
00:37:41.627 [2024-12-09T08:55:17.080Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:41.627 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:37:41.627 nvme0n1            :       2.00   27532.21     107.55       0.00     0.00    4644.00    2198.19   15291.73
00:37:41.627 [2024-12-09T08:55:17.080Z] ===================================================================================================================
00:37:41.627 [2024-12-09T08:55:17.080Z] Total              :               27532.21     107.55       0.00     0.00    4644.00    2198.19   15291.73
00:37:41.627 {
00:37:41.627   "results": [
00:37:41.627     {
00:37:41.627       "job": "nvme0n1",
00:37:41.627       "core_mask": "0x2",
00:37:41.627       "workload": "randread",
00:37:41.627       "status": "finished",
00:37:41.627       "queue_depth": 128,
00:37:41.627       "io_size": 4096,
00:37:41.627       "runtime": 2.004198,
00:37:41.627       "iops": 27532.209891437873,
00:37:41.627       "mibps": 107.54769488842919,
00:37:41.627       "io_failed": 0,
00:37:41.627       "io_timeout": 0,
00:37:41.627       "avg_latency_us": 4644.001078651685,
00:37:41.627       "min_latency_us": 2198.1866666666665,
00:37:41.627       "max_latency_us": 15291.733333333334
00:37:41.627     }
00:37:41.627   ],
00:37:41.627   "core_count": 1
00:37:41.627 }
00:37:41.627 09:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:41.627 09:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:41.627 09:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:41.627 | .driver_specific
00:37:41.627 | .nvme_error
00:37:41.627 | .status_code
00:37:41.627 | .command_transient_transport_error'
00:37:41.627 09:55:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
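The trace above is the test's `get_transient_errcount` step: it calls `bdev_get_iostat` over the bperf RPC socket and walks the reply with jq. Because the run was started with `bdev_nvme_set_options --nvme-error-stat`, the NVMe bdev layer keeps per-status-code error counters under `driver_specific.nvme_error`, and the jq path pulls out the one this test cares about. A minimal Python sketch of the same extraction; the JSON shape is inferred from the jq filter itself, and the sample value mirrors the 216 the assertion below checks:

```python
import json

# Illustrative bdev_get_iostat reply; only the fields the jq filter walks
# are shown, and the counter value mirrors the 216 asserted on below.
reply = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 216
          }
        }
      }
    }
  ]
}
""")

# Same path as: jq -r '.bdevs[0] | .driver_specific | .nvme_error
#                      | .status_code | .command_transient_transport_error'
count = (reply["bdevs"][0]["driver_specific"]
              ["nvme_error"]["status_code"]
              ["command_transient_transport_error"])
print(count)
assert count > 0  # the test's pass condition, seen as (( 216 > 0 )) below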
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 ))
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3051318
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3051318 ']'
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3051318
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:41.627 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051318
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051318'
00:37:41.888 killing process with pid 3051318
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3051318
00:37:41.888 Received shutdown signal, test time was about 2.000000 seconds
00:37:41.888
00:37:41.888                                                                  Latency(us)
00:37:41.888 [2024-12-09T08:55:17.341Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:37:41.888 [2024-12-09T08:55:17.341Z] ===================================================================================================================
00:37:41.888 [2024-12-09T08:55:17.341Z] Total              :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3051318
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3051980
00:37:41.888 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3051980 /var/tmp/bperf.sock
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3051980 ']'
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
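Here `run_bperf_err` launches a fresh bdevperf for the large-block pass (`-w randread -o 131072 -q 16`) with its JSON-RPC server on `/var/tmp/bperf.sock`, and `waitforlisten` (whose internals are traced below) blocks until that UNIX socket accepts connections before any `bperf_rpc` call is issued. A rough Python equivalent of that wait loop; the helper name and poll interval are assumptions, only the socket path and the `max_retries=100` budget come from the trace:

```python
import os
import socket
import time

def wait_for_rpc_socket(path: str, max_retries: int = 100,
                        delay: float = 0.1) -> None:
    """Poll a UNIX-domain RPC socket until a server is accepting on it."""
    for _ in range(max_retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)   # connect succeeded: the server is ready
                return
            except OSError:
                pass              # socket file exists but nothing listens yet
            finally:
                s.close()
        time.sleep(delay)
    raise TimeoutError(f"no RPC listener on {path}")

if __name__ == "__main__":
    wait_for_rpc_socket("/var/tmp/bperf.sock")
```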
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:41.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:41.889 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:41.889 [2024-12-09 09:55:17.272733] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:37:41.889 [2024-12-09 09:55:17.272793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3051980 ]
00:37:41.889 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:41.889 Zero copy mechanism will not be used.
00:37:42.150 [2024-12-09 09:55:17.354170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:42.150 [2024-12-09 09:55:17.370113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:42.150 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:42.150 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:42.150 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:42.150 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:42.410 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:42.410 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.411 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:42.411 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.411 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:42.411 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:42.703 nvme0n1
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
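This block is the error-injection setup for the second pass: it first clears any pending crc32c injection (`-t disable`), attaches the controller with the NVMe/TCP data digest enabled (`--ddgst`), then arms corruption with `-t corrupt` (the `-i 32` argument looks like an injection count or interval; its exact semantics belong to the accel_error module and are not shown in this log). The data digest is a CRC32C over each data PDU's payload, so every corrupted CRC makes the host's verification fail, which is what produces the flood of "data digest error" and TRANSIENT TRANSPORT ERROR lines that follow. Below is a self-contained sketch of that digest check; it is a pure-Python CRC-32C, not SPDK's accel-offloaded implementation:

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected, polynomial 0x1EDC6F41."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0x82F63B78  # reversed Castagnoli poly
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value.
assert crc32c(b"123456789") == 0xE3069283

payload = bytes(range(256))             # stand-in for one data PDU payload
received_digest = crc32c(payload) ^ 1   # what a corrupted crc32c op yields
if received_digest != crc32c(payload):
    # The initiator reacts the way the log below does: it prints
    # 'data digest error on tqpair=...' and completes the READ with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22), which --nvme-error-stat
    # then counts toward command_transient_transport_error.
    print("data digest mismatch -> transient transport error")
```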
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:42.704 09:55:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:42.704 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:42.704 Zero copy mechanism will not be used.
00:37:42.704 Running I/O for 2 seconds...
00:37:42.704 [2024-12-09 09:55:18.032801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.032832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.704 [2024-12-09 09:55:18.032842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:42.704 [2024-12-09 09:55:18.043729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.043752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.704 [2024-12-09 09:55:18.043760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:42.704 [2024-12-09 09:55:18.050696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.050715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.704 [2024-12-09 09:55:18.050729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:42.704 [2024-12-09 09:55:18.060467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.060487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.704 [2024-12-09 09:55:18.060494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:42.704 [2024-12-09 09:55:18.066951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.066970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:42.704 [2024-12-09 09:55:18.066977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:42.704 [2024-12-09 09:55:18.069532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0)
00:37:42.704 [2024-12-09 09:55:18.069551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.069557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.074774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.074794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.074800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.084927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.084946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.084953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.094865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.094883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.094890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.103000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.103019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.103026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.111094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.111112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.111119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.115793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.115815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.115822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.704 [2024-12-09 09:55:18.124852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.704 [2024-12-09 09:55:18.124870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 
nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.704 [2024-12-09 09:55:18.124877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.134521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.134541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.134547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.142293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.142312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.142319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.149784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.149803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.149809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.159397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.159416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.159423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.165086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.165105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.165111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.169984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.170002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.170009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.178184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.178203] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.178209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.187297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.187316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.989 [2024-12-09 09:55:18.187323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.989 [2024-12-09 09:55:18.195061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.989 [2024-12-09 09:55:18.195079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.195086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.206260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.206279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.206286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.216677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.216696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.216702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.228592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.228611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.228617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.240504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.240523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.240529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.253432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 
[2024-12-09 09:55:18.253451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.253458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.266227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.266247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.266254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.279317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.279337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.279347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.292184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.292203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.292209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.304871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.304890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.304899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.318098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.318117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.318124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.331031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.331051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.331057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.343800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.343819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.343826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.356095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.356114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.356121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.368874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.368893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.368900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.379690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.379709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.379716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.391580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.391602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.391608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.403420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.403441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.403449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.415527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.415546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.415553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:42.990 [2024-12-09 09:55:18.428172] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:42.990 [2024-12-09 09:55:18.428192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:42.990 [2024-12-09 09:55:18.428198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated data digest error (nvme_tcp.c:1365) and COMMAND TRANSIENT TRANSPORT ERROR (00/22) READ completion records on tqpair=(0x248fce0), timestamps 09:55:18.440 through 09:55:19.660, elided ...]
3727.00 IOPS, 465.88 MiB/s [2024-12-09T08:55:19.264Z]
[2024-12-09 09:55:19.672750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.672775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.683055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.683074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.683080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.693884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.693903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.693909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.704009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.704028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.704034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.715192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.715210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.715217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.725303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.725321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.725328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.734010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.734028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.734035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.741835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.741854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.741860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.752693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.752714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.752721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.764217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.764236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.764242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.775919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.775937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.775944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.336 [2024-12-09 09:55:19.785998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.336 [2024-12-09 09:55:19.786017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.336 [2024-12-09 09:55:19.786023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.797872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.797891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.797897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.808207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.808225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.808232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.817348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.817366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:44.599 [2024-12-09 09:55:19.817373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.827830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.827849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.827856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.839334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.839352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.839359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.851250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.851269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.851276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.861098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.861116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.861122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.871750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.871769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.871776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.883177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.883195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.883201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.891544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.891562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21984 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.891569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.900674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.900693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.900699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.912399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.912418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.912425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.923828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.923847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.923853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.934289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.934308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.934317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.943150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.943169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.943175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.953754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.953772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.953779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.964885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.964903] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.964910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.974733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.974752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.974759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.986304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.986323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.986330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:19.996537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:19.996555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:19.996562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:44.599 [2024-12-09 09:55:20.008289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.599 [2024-12-09 09:55:20.008308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.599 [2024-12-09 09:55:20.008316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:44.600 [2024-12-09 09:55:20.019774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.600 [2024-12-09 09:55:20.019793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.600 [2024-12-09 09:55:20.019799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:44.600 3376.00 IOPS, 422.00 MiB/s [2024-12-09T08:55:20.053Z] [2024-12-09 09:55:20.030911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x248fce0) 00:37:44.600 [2024-12-09 09:55:20.030930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:44.600 [2024-12-09 09:55:20.030937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:44.600 00:37:44.600 Latency(us) 00:37:44.600 [2024-12-09T08:55:20.053Z] Device Information : 
00:37:44.600 [2024-12-09T08:55:20.053Z] Device Information : runtime(s)   IOPS      MiB/s    Fail/s  TO/s   Average   min      max
00:37:44.600 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:37:44.600 nvme0n1             : 2.01         3374.82   421.85   0.00    0.00   4736.81   600.75   16602.45
00:37:44.600 [2024-12-09T08:55:20.053Z] ===================================================================================================================
00:37:44.600 [2024-12-09T08:55:20.053Z] Total               :              3374.82   421.85   0.00    0.00   4736.81   600.75   16602.45
00:37:44.600 {
00:37:44.600   "results": [
00:37:44.600     {
00:37:44.600       "job": "nvme0n1",
00:37:44.600       "core_mask": "0x2",
00:37:44.600       "workload": "randread",
00:37:44.600       "status": "finished",
00:37:44.600       "queue_depth": 16,
00:37:44.600       "io_size": 131072,
00:37:44.600       "runtime": 2.005442,
00:37:44.600       "iops": 3374.81712260938,
00:37:44.600       "mibps": 421.8521403261725,
00:37:44.600       "io_failed": 0,
00:37:44.600       "io_timeout": 0,
00:37:44.600       "avg_latency_us": 4736.810717100079,
00:37:44.600       "min_latency_us": 600.7466666666667,
00:37:44.600       "max_latency_us": 16602.453333333335
00:37:44.600     }
00:37:44.600   ],
00:37:44.600   "core_count": 1
00:37:44.600 }
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:44.861 | .driver_specific
00:37:44.861 | .nvme_error
00:37:44.861 | .status_code
00:37:44.861 | .command_transient_transport_error'
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3051980
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3051980 ']'
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3051980
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051980
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051980'
00:37:44.861 killing process with pid 3051980
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3051980
00:37:44.861 Received shutdown signal, test time was about 2.000000 seconds
00:37:44.861
00:37:44.861 Latency(us)
00:37:44.861 [2024-12-09T08:55:20.314Z] Device Information : runtime(s)   IOPS   MiB/s   Fail/s   TO/s   Average   min   max
00:37:44.861 [2024-12-09T08:55:20.314Z] ===================================================================================================================
00:37:44.861 [2024-12-09T08:55:20.314Z] Total               :              0.00   0.00    0.00     0.00   0.00      0.00  0.00
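For reference, the get_transient_errcount check traced above reduces to one bdev_get_iostat RPC piped through a jq filter. A minimal standalone sketch follows, using only the commands, paths, and filter visible in this trace; the final assertion mirrors the (( 219 > 0 )) line above.

get_transient_errcount() {
    # Ask bdevperf's RPC server for per-bdev I/O statistics, then pull out the
    # count of COMMAND TRANSIENT TRANSPORT ERROR completions. Relies on the
    # bdev_nvme_set_options --nvme-error-stat call made earlier in this run.
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
}

# The digest-error test then asserts that at least one transient error occurred:
(( $(get_transient_errcount nvme0n1) > 0 ))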
00:37:44.861 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3051980
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3052463
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3052463 /var/tmp/bperf.sock
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3052463 ']'
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:37:45.121 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:45.122 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:45.122 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:37:45.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:45.122 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:45.122 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
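Condensed, the launch sequence above starts bdevperf against a private RPC socket and blocks until that socket answers. A sketch under the trace's own paths and flags; the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten, which additionally re-checks the pid and caps retries at max_retries=100.

# Start bdevperf idle (-z waits for the perform_tests RPC) on its own socket.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!

# Poll the RPC socket until the app responds (simplified waitforlisten).
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done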
00:37:45.122 [2024-12-09 09:55:20.446306] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:37:45.122 [2024-12-09 09:55:20.446365] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3052463 ]
00:37:45.122 [2024-12-09 09:55:20.527872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:45.122 [2024-12-09 09:55:20.543895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:45.382 09:55:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:45.645 nvme0n1
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:45.645 09:55:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:45.909 Running I/O for 2 seconds...
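Everything between the reactor start and "Running I/O for 2 seconds..." is plain RPC configuration. The sketch below replays it as direct rpc.py calls; it assumes, as the bperf_rpc vs. rpc_cmd split in the trace suggests, that the accel_error_inject_error calls go to the nvmf target's default RPC socket while the bdev_nvme calls go to bdevperf's /var/tmp/bperf.sock. All commands and arguments are taken verbatim from the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Initiator (bdevperf): keep per-status-code NVMe error counters and retry
# failed I/O indefinitely, so digest errors stay transient rather than fatal.
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Target (default RPC socket, assumed): crc32c injection off while attaching.
"$rpc" accel_error_inject_error -o crc32c -t disable

# Initiator: attach the subsystem over TCP with data digest enabled (--ddgst).
"$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Target: corrupt the next 256 crc32c operations so each data digest check
# fails and every I/O completes with TRANSIENT TRANSPORT ERROR (00/22).
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256

# Kick off the workload configured on bdevperf's command line (2 s randwrite).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests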
00:37:45.909 [2024-12-09 09:55:21.146696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee1f80
00:37:45.909 [2024-12-09 09:55:21.147739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:45.909 [2024-12-09 09:55:21.147767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:37:45.909 [2024-12-09 09:55:21.155329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060
00:37:45.909 [2024-12-09 09:55:21.156330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:45.909 [2024-12-09 09:55:21.156350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[... several dozen identical Data-digest-error / failed-WRITE / TRANSIENT TRANSPORT ERROR triplets from 09:55:21.163 through 09:55:21.660 trimmed; they differ only in cid, lba, and the pdu pointer ...]
00:37:46.439 [2024-12-09 09:55:21.668287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee0a68
00:37:46.439 [2024-12-09 09:55:21.669068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:46.439 [2024-12-09 09:55:21.669084]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.676680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef2948 00:37:46.439 [2024-12-09 09:55:21.677481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.677498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.685077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef3a28 00:37:46.439 [2024-12-09 09:55:21.685867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.685883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.693484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef4b08 00:37:46.439 [2024-12-09 09:55:21.694287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.694303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.701916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee8088 00:37:46.439 [2024-12-09 09:55:21.702673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.702690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.710325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee6fa8 00:37:46.439 [2024-12-09 09:55:21.711111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.711127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.718739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef6458 00:37:46.439 [2024-12-09 09:55:21.719524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.719540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.727154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef7538 00:37:46.439 [2024-12-09 09:55:21.727898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 
09:55:21.727915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.735552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef8618 00:37:46.439 [2024-12-09 09:55:21.736339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.736355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.743966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efd640 00:37:46.439 [2024-12-09 09:55:21.744754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.744771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.752372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efdeb0 00:37:46.439 [2024-12-09 09:55:21.753153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.753170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.760789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebb98 00:37:46.439 [2024-12-09 09:55:21.761564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.761581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.769181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeaab8 00:37:46.439 [2024-12-09 09:55:21.769969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.769986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.777587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee99d8 00:37:46.439 [2024-12-09 09:55:21.778391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.778407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.786015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:46.439 [2024-12-09 09:55:21.786816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:46.439 [2024-12-09 09:55:21.786833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.794425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee1f80 00:37:46.439 [2024-12-09 09:55:21.795214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.795230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.802842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee0ea0 00:37:46.439 [2024-12-09 09:55:21.803625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.803644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.811237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edfdc0 00:37:46.439 [2024-12-09 09:55:21.812024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.812040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.819645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef2d80 00:37:46.439 [2024-12-09 09:55:21.820441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.820457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.828064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef3e60 00:37:46.439 [2024-12-09 09:55:21.828814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.828831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.836479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef4f40 00:37:46.439 [2024-12-09 09:55:21.837264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.837280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.844889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee73e0 00:37:46.439 [2024-12-09 09:55:21.845672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15103 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.845689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.853281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef57b0 00:37:46.439 [2024-12-09 09:55:21.854040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.854057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.861687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef6890 00:37:46.439 [2024-12-09 09:55:21.862481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.862500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.870102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef7970 00:37:46.439 [2024-12-09 09:55:21.870898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.870915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.878545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef8a50 00:37:46.439 [2024-12-09 09:55:21.879296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.879312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.439 [2024-12-09 09:55:21.886965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:46.439 [2024-12-09 09:55:21.887744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.439 [2024-12-09 09:55:21.887761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.895373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efeb58 00:37:46.703 [2024-12-09 09:55:21.896173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.896190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.903787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeaef0 00:37:46.703 [2024-12-09 09:55:21.904531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:3688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.904547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.912204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee9e10 00:37:46.703 [2024-12-09 09:55:21.912989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.913007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.920605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee8d30 00:37:46.703 [2024-12-09 09:55:21.921405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.921422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.929042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee2c28 00:37:46.703 [2024-12-09 09:55:21.929839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.929856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.937457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee1b48 00:37:46.703 [2024-12-09 09:55:21.938255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.703 [2024-12-09 09:55:21.938272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:37:46.703 [2024-12-09 09:55:21.945315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efac10 00:37:46.704 [2024-12-09 09:55:21.946085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.946101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.954707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eecc78 00:37:46.704 [2024-12-09 09:55:21.955606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.955623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.963382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edfdc0 00:37:46.704 [2024-12-09 09:55:21.964179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:116 nsid:1 lba:11815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.964196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.971923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef8a50 00:37:46.704 [2024-12-09 09:55:21.972904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.972921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.980349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef7970 00:37:46.704 [2024-12-09 09:55:21.981340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.981357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.988840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef6890 00:37:46.704 [2024-12-09 09:55:21.989847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.989862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:21.997337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef57b0 00:37:46.704 [2024-12-09 09:55:21.998341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:21.998357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.005783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee8d30 00:37:46.704 [2024-12-09 09:55:22.006773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.006789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.014218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee2c28 00:37:46.704 [2024-12-09 09:55:22.015234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.015250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.022636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee1b48 00:37:46.704 [2024-12-09 09:55:22.023661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.023678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.031071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee0a68 00:37:46.704 [2024-12-09 09:55:22.032078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.032095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.039469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee8088 00:37:46.704 [2024-12-09 09:55:22.040477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.040494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.047895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef4b08 00:37:46.704 [2024-12-09 09:55:22.048908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.048925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.056305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef2d80 00:37:46.704 [2024-12-09 09:55:22.057320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:18980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.057337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.064728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef3e60 00:37:46.704 [2024-12-09 09:55:22.065708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.065724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.073158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef1ca0 00:37:46.704 [2024-12-09 09:55:22.074172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.074189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.081577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef0bc0 00:37:46.704 [2024-12-09 09:55:22.082578] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.082598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.090013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eefae0 00:37:46.704 [2024-12-09 09:55:22.091018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.091035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.098417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eee190 00:37:46.704 [2024-12-09 09:55:22.099417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.099433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.106253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:46.704 [2024-12-09 09:55:22.107246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.107262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.115511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efef90 00:37:46.704 [2024-12-09 09:55:22.116621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.116640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.124095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eddc00 00:37:46.704 [2024-12-09 09:55:22.125223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.125240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.132494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efbcf0 00:37:46.704 [2024-12-09 09:55:22.133811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.133830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:37:46.704 30079.00 IOPS, 117.50 MiB/s [2024-12-09T08:55:22.157Z] [2024-12-09 09:55:22.140884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf48c30) with pdu=0x200016ee6738 00:37:46.704 [2024-12-09 09:55:22.142004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.142021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.704 [2024-12-09 09:55:22.149299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee4578 00:37:46.704 [2024-12-09 09:55:22.150428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:18475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.704 [2024-12-09 09:55:22.150445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.157738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:46.967 [2024-12-09 09:55:22.158841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.967 [2024-12-09 09:55:22.158859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.166164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eea248 00:37:46.967 [2024-12-09 09:55:22.167286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.967 [2024-12-09 09:55:22.167303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.174590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eff3c8 00:37:46.967 [2024-12-09 09:55:22.175709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:22155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.967 [2024-12-09 09:55:22.175726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.182989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edf118 00:37:46.967 [2024-12-09 09:55:22.184109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.967 [2024-12-09 09:55:22.184126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.191399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef9b30 00:37:46.967 [2024-12-09 09:55:22.192522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.967 [2024-12-09 09:55:22.192539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.967 [2024-12-09 09:55:22.199816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf48c30) with pdu=0x200016efc998 00:37:46.968 [2024-12-09 09:55:22.200939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:10718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.200956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.208231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efb480 00:37:46.968 [2024-12-09 09:55:22.209349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.209365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.216651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5ec8 00:37:46.968 [2024-12-09 09:55:22.217781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.217798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.225080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3d08 00:37:46.968 [2024-12-09 09:55:22.226201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.226218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.233485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef6890 00:37:46.968 [2024-12-09 09:55:22.234612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.234628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.241927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeaab8 00:37:46.968 [2024-12-09 09:55:22.243049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.243067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.250355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efdeb0 00:37:46.968 [2024-12-09 09:55:22.251491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.251508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.258778] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edf988 00:37:46.968 [2024-12-09 09:55:22.259891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.259908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.267187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef92c0 00:37:46.968 [2024-12-09 09:55:22.268314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.268331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.275594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efbcf0 00:37:46.968 [2024-12-09 09:55:22.276706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.276723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.283997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee6738 00:37:46.968 [2024-12-09 09:55:22.285083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.285100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.292442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee4578 00:37:46.968 [2024-12-09 09:55:22.293562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.293579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.300849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:46.968 [2024-12-09 09:55:22.301971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.301990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.309259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eea248 00:37:46.968 [2024-12-09 09:55:22.310377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.310394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 
09:55:22.317657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eff3c8 00:37:46.968 [2024-12-09 09:55:22.318742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.318759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.326077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edf118 00:37:46.968 [2024-12-09 09:55:22.327160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.327176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.334526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef9b30 00:37:46.968 [2024-12-09 09:55:22.335659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.335676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.342952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efc998 00:37:46.968 [2024-12-09 09:55:22.344072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.344088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.351366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efb480 00:37:46.968 [2024-12-09 09:55:22.352485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.352502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.359774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5ec8 00:37:46.968 [2024-12-09 09:55:22.360861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.360878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.368194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3d08 00:37:46.968 [2024-12-09 09:55:22.369319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.369335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:37:46.968 [2024-12-09 09:55:22.376645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef6890 00:37:46.968 [2024-12-09 09:55:22.377771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.377788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.385052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeaab8 00:37:46.968 [2024-12-09 09:55:22.386173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.386190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.393459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efdeb0 00:37:46.968 [2024-12-09 09:55:22.394451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.394468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.402046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edf988 00:37:46.968 [2024-12-09 09:55:22.403155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.403172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:46.968 [2024-12-09 09:55:22.410475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef92c0 00:37:46.968 [2024-12-09 09:55:22.411598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:46.968 [2024-12-09 09:55:22.411616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.418899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efbcf0 00:37:47.230 [2024-12-09 09:55:22.420023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.230 [2024-12-09 09:55:22.420040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.427318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee6738 00:37:47.230 [2024-12-09 09:55:22.428442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.230 [2024-12-09 09:55:22.428458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 
p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.435775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee4578 00:37:47.230 [2024-12-09 09:55:22.436905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.230 [2024-12-09 09:55:22.436922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.444225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.230 [2024-12-09 09:55:22.445350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.230 [2024-12-09 09:55:22.445367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.452663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eea248 00:37:47.230 [2024-12-09 09:55:22.453744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.230 [2024-12-09 09:55:22.453760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.230 [2024-12-09 09:55:22.461090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eff3c8 00:37:47.230 [2024-12-09 09:55:22.462214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.462231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.469493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edf118 00:37:47.231 [2024-12-09 09:55:22.470495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.470511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.477903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef9b30 00:37:47.231 [2024-12-09 09:55:22.478886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.478902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.485550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee9168 00:37:47.231 [2024-12-09 09:55:22.486863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.486880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 
sqhd:0074 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.493336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee84c0 00:37:47.231 [2024-12-09 09:55:22.494086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.494102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.501793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee9168 00:37:47.231 [2024-12-09 09:55:22.502533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.502549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.511417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef1430 00:37:47.231 [2024-12-09 09:55:22.512512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.512528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.520002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.231 [2024-12-09 09:55:22.521107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.521127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.528485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016edece0 00:37:47.231 [2024-12-09 09:55:22.529580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.529597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.536920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee8088 00:37:47.231 [2024-12-09 09:55:22.537987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.538004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.545314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef0bc0 00:37:47.231 [2024-12-09 09:55:22.546281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.546298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:69 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.554028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeee38 00:37:47.231 [2024-12-09 09:55:22.555106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.555123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.561888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3498 00:37:47.231 [2024-12-09 09:55:22.562970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.562988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.570444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef1868 00:37:47.231 [2024-12-09 09:55:22.571534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.571551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.578897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ef4f40 00:37:47.231 [2024-12-09 09:55:22.579990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.580006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.587320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3498 00:37:47.231 [2024-12-09 09:55:22.588416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.588433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.594754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.231 [2024-12-09 09:55:22.595569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.595585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.603239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eeee38 00:37:47.231 [2024-12-09 09:55:22.604055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.604071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.611674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efb480 00:37:47.231 [2024-12-09 09:55:22.612485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.612501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.620120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee88f8 00:37:47.231 [2024-12-09 09:55:22.620934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.620951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.628577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.231 [2024-12-09 09:55:22.629392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.629409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.636524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee49b0 00:37:47.231 [2024-12-09 09:55:22.637260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.637277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.645098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.231 [2024-12-09 09:55:22.645695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.645711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.653671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.231 [2024-12-09 09:55:22.654380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.654398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.662138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.231 [2024-12-09 09:55:22.662892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.662909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.670570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.231 [2024-12-09 09:55:22.671321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.671338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.231 [2024-12-09 09:55:22.679041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.231 [2024-12-09 09:55:22.679784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:16031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.231 [2024-12-09 09:55:22.679802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.687493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.493 [2024-12-09 09:55:22.688238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.688254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.695942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.493 [2024-12-09 09:55:22.696683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.696700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.704365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.493 [2024-12-09 09:55:22.705102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.705119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.712831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.493 [2024-12-09 09:55:22.713573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.713589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.721257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.493 [2024-12-09 09:55:22.722005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.722022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.729727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.493 [2024-12-09 09:55:22.730459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.730476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.738169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.493 [2024-12-09 09:55:22.738901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.738923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.746595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.493 [2024-12-09 09:55:22.747341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.747358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.755047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.493 [2024-12-09 09:55:22.755793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.755810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.763505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.493 [2024-12-09 09:55:22.764240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.764257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.493 [2024-12-09 09:55:22.771960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.493 [2024-12-09 09:55:22.772693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.493 [2024-12-09 09:55:22.772710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.780392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.494 [2024-12-09 09:55:22.781135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.781152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.788819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.494 [2024-12-09 09:55:22.789559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.789575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.797277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.494 [2024-12-09 09:55:22.797999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.798016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.805707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.494 [2024-12-09 09:55:22.806407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.806423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.814158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.494 [2024-12-09 09:55:22.814901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.814921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.822553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.494 [2024-12-09 09:55:22.823305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.823321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.830976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.494 [2024-12-09 09:55:22.831724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.831741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.839407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.494 [2024-12-09 09:55:22.840156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 
09:55:22.840173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.847857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.494 [2024-12-09 09:55:22.848598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.848614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.856304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.494 [2024-12-09 09:55:22.857047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.857064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.864744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.494 [2024-12-09 09:55:22.865480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.865496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.873131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.494 [2024-12-09 09:55:22.873915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.873932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.881556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.494 [2024-12-09 09:55:22.882282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.882299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.889994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.494 [2024-12-09 09:55:22.890711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.890728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.898436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.494 [2024-12-09 09:55:22.899177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:37:47.494 [2024-12-09 09:55:22.899194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.906873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.494 [2024-12-09 09:55:22.907610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.907626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.915319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.494 [2024-12-09 09:55:22.916062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.916079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.923745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.494 [2024-12-09 09:55:22.924448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.924464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.932183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.494 [2024-12-09 09:55:22.932927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.932943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.494 [2024-12-09 09:55:22.940612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.494 [2024-12-09 09:55:22.941356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.494 [2024-12-09 09:55:22.941372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.949038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.757 [2024-12-09 09:55:22.949775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.949792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.957474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.757 [2024-12-09 09:55:22.958218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9167 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.958235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.965895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.757 [2024-12-09 09:55:22.966502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.966518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.974301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.757 [2024-12-09 09:55:22.975047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.975063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.982737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.757 [2024-12-09 09:55:22.983446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.983463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.991175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.757 [2024-12-09 09:55:22.991920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:22.991937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:22.999603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.757 [2024-12-09 09:55:23.000345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.000361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.008117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.757 [2024-12-09 09:55:23.008834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.008851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.016533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.757 [2024-12-09 09:55:23.017274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15474 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.017290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.024975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.757 [2024-12-09 09:55:23.025711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.025728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.033409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.757 [2024-12-09 09:55:23.034155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.034175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.041846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.757 [2024-12-09 09:55:23.042588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.042604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.050271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.757 [2024-12-09 09:55:23.051018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.051034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.058715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.757 [2024-12-09 09:55:23.059457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.059474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.067138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.757 [2024-12-09 09:55:23.067881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.067898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.075579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.757 [2024-12-09 09:55:23.076323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:24097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.757 [2024-12-09 09:55:23.076340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.757 [2024-12-09 09:55:23.084040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.757 [2024-12-09 09:55:23.084783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.084800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.092478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5658 00:37:47.758 [2024-12-09 09:55:23.093220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.093237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.100918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee5a90 00:37:47.758 [2024-12-09 09:55:23.101659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.101676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.109340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016efe2e8 00:37:47.758 [2024-12-09 09:55:23.110089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.110106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.117791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eebfd0 00:37:47.758 [2024-12-09 09:55:23.118526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.118543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.126260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016ee3060 00:37:47.758 [2024-12-09 09:55:23.127011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:37:47.758 [2024-12-09 09:55:23.127028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:37:47.758 [2024-12-09 09:55:23.134694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf48c30) with pdu=0x200016eec840 00:37:47.758 [2024-12-09 09:55:23.135969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:91 nsid:1 lba:23156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:37:47.758 [2024-12-09 09:55:23.135986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:37:47.758 30183.50 IOPS, 117.90 MiB/s
00:37:47.758 Latency(us)
00:37:47.758 [2024-12-09T08:55:23.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:47.758 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:37:47.758 nvme0n1 : 2.00 30197.31 117.96 0.00 0.00 4234.04 1720.32 14199.47
00:37:47.758 [2024-12-09T08:55:23.211Z] ===================================================================================================================
00:37:47.758 [2024-12-09T08:55:23.211Z] Total : 30197.31 117.96 0.00 0.00 4234.04 1720.32 14199.47
00:37:47.758 {
00:37:47.758 "results": [
00:37:47.758 {
00:37:47.758 "job": "nvme0n1",
00:37:47.758 "core_mask": "0x2",
00:37:47.758 "workload": "randwrite",
00:37:47.758 "status": "finished",
00:37:47.758 "queue_depth": 128,
00:37:47.758 "io_size": 4096,
00:37:47.758 "runtime": 2.003324,
00:37:47.758 "iops": 30197.312067344075,
00:37:47.758 "mibps": 117.9582502630628,
00:37:47.758 "io_failed": 0,
00:37:47.758 "io_timeout": 0,
00:37:47.758 "avg_latency_us": 4234.035154861283,
00:37:47.758 "min_latency_us": 1720.32,
00:37:47.758 "max_latency_us": 14199.466666666667
00:37:47.758 }
00:37:47.758 ],
00:37:47.758 "core_count": 1
00:37:47.758 }
00:37:47.758 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:37:47.758 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:37:47.758 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:37:47.758 | .driver_specific
00:37:47.758 | .nvme_error
00:37:47.758 | .status_code
00:37:47.758 | .command_transient_transport_error'
00:37:47.758 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 ))
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3052463
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3052463 ']'
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3052463
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3052463
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3052463'
killing process with pid 3052463
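The repeating three-line pattern above is the injected failure observed end to end: tcp.c reports the data digest (CRC32C) mismatch, nvme_qpair.c prints the affected WRITE command, and the completion carries COMMAND TRANSIENT TRANSPORT ERROR (00/22). The get_transient_errcount trace then shows how the harness tallies those completions: it fetches iostat for the bdev over the bperf RPC socket and extracts the transient-error counter with jq. A minimal standalone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock and the bdev is named nvme0n1:

    # Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for nvme0n1.
    # rpc.py path and jq filter are copied from the trace above.
    errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    (( errcount > 0 )) && echo "digest errors detected: $errcount"

This run recorded 237 transient errors, so the (( 237 > 0 )) assertion passes and the first bdevperf instance is shut down.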
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3052463
00:37:48.019 Received shutdown signal, test time was about 2.000000 seconds
00:37:48.019
00:37:48.019 Latency(us)
00:37:48.019 [2024-12-09T08:55:23.472Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:48.019 [2024-12-09T08:55:23.472Z] ===================================================================================================================
00:37:48.019 [2024-12-09T08:55:23.472Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:37:48.019 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3052463
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3053033
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3053033 /var/tmp/bperf.sock
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3053033 ']'
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:48.281 [2024-12-09 09:55:23.555497] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:37:48.281 [2024-12-09 09:55:23.555550] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053033 ]
00:37:48.281 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:48.281 Zero copy mechanism will not be used.
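With the qd=128 pass torn down, run_bperf_err relaunches bdevperf for the 131072-byte, queue-depth-16 variant of the same digest-error test. A sketch of that launch as traced above; paths are specific to this workspace, and -z keeps bdevperf idle until perform_tests is sent over the RPC socket:

    # Launch bdevperf on core mask 0x2 and park it (-z) until tests are
    # driven over /var/tmp/bperf.sock; backgrounding mirrors what digest.sh does.
    BPERF_SOCK=/var/tmp/bperf.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r "$BPERF_SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # waitforlisten (autotest_common.sh) then polls until the socket accepts RPCs.

The "Zero copy mechanism will not be used" notice is expected here: the 131072-byte I/O size is above bdevperf's 65536-byte zero-copy threshold, as the log itself states.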
00:37:48.281 [2024-12-09 09:55:23.641224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:48.281 [2024-12-09 09:55:23.655810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:48.281 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:48.543 09:55:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:37:48.804 nvme0n1
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:37:48.804 09:55:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:48.804 I/O size of 131072 is greater than zero copy threshold (65536).
00:37:48.804 Zero copy mechanism will not be used.
00:37:48.804 Running I/O for 2 seconds...
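Before any I/O moves, the harness configures the fresh bdevperf instance entirely over RPC: NVMe error statistics are enabled with unlimited bdev retries, the crc32c error injector is set to disable so the controller can attach cleanly, the controller is attached with TCP data digest (--ddgst) enabled, and only then is the injector flipped to corrupt (-t corrupt -i 32, as traced) before perform_tests starts the workload. A condensed sketch of that sequence, assuming the same socket and target address as the trace:

    # RPC wiring for the digest-error run; all commands appear in the trace above.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable     # clean digests during attach
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

The digest-error notices that follow are therefore the expected output of this pass: each corrupted crc32c digest surfaces as a data digest failure on the TCP qpair and completes with a transient transport error, which get_transient_errcount will again tally at the end of the run.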
00:37:48.804 [2024-12-09 09:55:24.235013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:48.804 [2024-12-09 09:55:24.235121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.804 [2024-12-09 09:55:24.235146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:48.804 [2024-12-09 09:55:24.241324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:48.804 [2024-12-09 09:55:24.241382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.804 [2024-12-09 09:55:24.241401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:48.804 [2024-12-09 09:55:24.245256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:48.804 [2024-12-09 09:55:24.245326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.804 [2024-12-09 09:55:24.245342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:48.804 [2024-12-09 09:55:24.249077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:48.804 [2024-12-09 09:55:24.249151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.804 [2024-12-09 09:55:24.249169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:48.805 [2024-12-09 09:55:24.252881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:48.805 [2024-12-09 09:55:24.252945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:48.805 [2024-12-09 09:55:24.252961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.256623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.256722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.256742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.261150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.261215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.261231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.265945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.266016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.266032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.269461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.269522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.269540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.274400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.274490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.274509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.279749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.279826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.279842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.068 [2024-12-09 09:55:24.283916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.068 [2024-12-09 09:55:24.283972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.068 [2024-12-09 09:55:24.283988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.069 [2024-12-09 09:55:24.288097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.069 [2024-12-09 09:55:24.288173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.069 [2024-12-09 09:55:24.288193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.069 [2024-12-09 09:55:24.292083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.069 [2024-12-09 09:55:24.292153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.069 [2024-12-09 09:55:24.292171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.295930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.296003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.296018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.299597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.299650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.299669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.303177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.303234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.303250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.306905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.306972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.306987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.310368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.310464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.310482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.314176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.314229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.314246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.317788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.317850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.317869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.321292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.321352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.321368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.324780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.324846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.324863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.331072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.331294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.331310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.335624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.335731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.335747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.342283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.342374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.342392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.349780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.349838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.349854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.356098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.356156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.356172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.361675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.361941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.361958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.366365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.366441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.366456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.370097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.370159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.370175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.373864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.373928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.373943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.377650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.377712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.377728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.383596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.383888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.383905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.388121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.388198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.391702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.391762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.391778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.396000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.396059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.396076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.399996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.400065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.400081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.069 [2024-12-09 09:55:24.405536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.069 [2024-12-09 09:55:24.405602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.069 [2024-12-09 09:55:24.405621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.413679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.413734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.418080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.418141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.418157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.425866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.426116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.426135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.430773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.430851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.430867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.434674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.434756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.434772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.439121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.439193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.439210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.442835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.442910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.447303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.447361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.447377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.451214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.451276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.451291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.456260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.456334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.456350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.459928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.459991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.460006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.463495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.463558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.463576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.467195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.467267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.467283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.471052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.471118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.471134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.474625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.474682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.474699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.478819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.478885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.478900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.483300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.483369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.483385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.490942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.491009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.491025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.496928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.496989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.497011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.500739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.500816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.500832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.504607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.504665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.504681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.510841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.510940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.510956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.070 [2024-12-09 09:55:24.517281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.070 [2024-12-09 09:55:24.517359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.070 [2024-12-09 09:55:24.517375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.333 [2024-12-09 09:55:24.522076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.333 [2024-12-09 09:55:24.522327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.522346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.529032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.529108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.529124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.533055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.533143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.533162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.536526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.536607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.536622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.540454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.540527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.540542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.544236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.544315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.544332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.548119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.548189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.548204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.552184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.552231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.552247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.556038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.556093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.556109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.559940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.560007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.560023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.563863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.563921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.563936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.567733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.567804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.567819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.573375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.573435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.573451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.580560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.580614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.580630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.585567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.585645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.585661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.589551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.589629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.589649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.594171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.594229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.594245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.599660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.599739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.599755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.604432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.604486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.604502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.608206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.608263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.608279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.612038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.612108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.612124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.615566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.615649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.615665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.619588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.619891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.619908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.627976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.628053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.628069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.632050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.632120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.632136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.635827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.635893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.635909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.641325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.334 [2024-12-09 09:55:24.641390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.334 [2024-12-09 09:55:24.641405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.334 [2024-12-09 09:55:24.645988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.646054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.646069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.651226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.651274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.651292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.655085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.655152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.655168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.658863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.658990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.659006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.664681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.664739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.664754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.668646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.668706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.668722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.674994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.675085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.675101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.679597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.679663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.679679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.685697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.685772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.685788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.690768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.691014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.691032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.701123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.701171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.701186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.705740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.705813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.705829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.709882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.709958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.709974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.714281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.714358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.714374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.718213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.718274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.718289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.722229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.722290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.722306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.725928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.725996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.726011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.730111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.730167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.730183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.734142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.734213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.734228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.738052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.738104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.738119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.741784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.741874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.741890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.746997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.747104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.747120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.751755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.751811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.751827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.755646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.755717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.755733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.761127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.761200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.761216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.766982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.767046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.767062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.770914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.770988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.335 [2024-12-09 09:55:24.771003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.335 [2024-12-09 09:55:24.774551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.335 [2024-12-09 09:55:24.774658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.336 [2024-12-09 09:55:24.774677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.336 [2024-12-09 09:55:24.780037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.336 [2024-12-09 09:55:24.780111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.336 [2024-12-09 09:55:24.780126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.783998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.784042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.784058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.788368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.788422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.788438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.792360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.792428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.792443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.796349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.796419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.796435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.800346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.800408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.800424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.805811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.805887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.805902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.814162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.814244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.814259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.818406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.818465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.818480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.822466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.822543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.822559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.826549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.826605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.826621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.830473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.830542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.599 [2024-12-09 09:55:24.830558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.599 [2024-12-09 09:55:24.834606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.599 [2024-12-09 09:55:24.834664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.834680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.839748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.839815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.839832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.843744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.843819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.843835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.847547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.847606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.847621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.853158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.853213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.853229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.857215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.857274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.857289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.861420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.861496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.861512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.866514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.866575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.866591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.872572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.872646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.872661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.877429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.877486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.877501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.881432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.881487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.881502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.885353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.885407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.885422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.889251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.889314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.889330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.893265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.893341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.893357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.897184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.897274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.897292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.900848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.900904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.900919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.904517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.904592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.904607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.908042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.908110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.908126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.911689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.912017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.912034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.920511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.920568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.920584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.924265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.924322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.924339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.927983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.928066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.928081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.931576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.931634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.931659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.936343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.936455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.936471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.945039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.945122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.945137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.950074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.950133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.600 [2024-12-09 09:55:24.950148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.600 [2024-12-09 09:55:24.953765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.600 [2024-12-09 09:55:24.953839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.601 [2024-12-09 09:55:24.953855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:37:49.601 [2024-12-09 09:55:24.957688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.601 [2024-12-09 09:55:24.957976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.601 [2024-12-09 09:55:24.957993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:37:49.601 [2024-12-09 09:55:24.966079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.601 [2024-12-09 09:55:24.966144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.601 [2024-12-09 09:55:24.966160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:49.601 [2024-12-09 09:55:24.972041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.601 [2024-12-09 09:55:24.972296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.601 [2024-12-09 09:55:24.972313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:37:49.601 [2024-12-09 09:55:24.976473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:49.601 [2024-12-09 09:55:24.976809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:49.601 [2024-12-09 09:55:24.976825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:24.983394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:24.983458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:24.983474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:24.990309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:24.990398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:24.990414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:24.995946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:24.996215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:24.996231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.002798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.003072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.003089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.008764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.008835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.008851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.012590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.012688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.012705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.018015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.018088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.018104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.023853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.023923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.023939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.031168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.031269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.031285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.039132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.601 [2024-12-09 09:55:25.039203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.601 [2024-12-09 09:55:25.039219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.601 [2024-12-09 09:55:25.048489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.048804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.048822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.053449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.053506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.053522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.059169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.059270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.059286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.067508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.067572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 
09:55:25.067588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.076479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.076551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.076567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.085226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.085287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.085302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.094030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.094102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.094117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.101015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.101278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.101300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.109573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.109632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.109654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.118671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.118723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.118739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.127310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.127379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:49.863 [2024-12-09 09:55:25.127395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.137994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.138065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.138082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.146902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.147134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.147151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.155183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.155462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.155479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.163634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.163893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.163910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.172393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.172454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.172469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.179886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.179955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.179971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.185603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.185688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.185704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.189693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.189967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.189983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.197575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.197626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.197646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.202863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.202934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.202950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.209466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.209528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.209544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.213766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.214096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.214113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:49.863 [2024-12-09 09:55:25.218430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.218496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:49.863 [2024-12-09 09:55:25.218512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:49.863 6013.00 IOPS, 751.62 MiB/s [2024-12-09T08:55:25.316Z] [2024-12-09 09:55:25.228760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:49.863 [2024-12-09 09:55:25.228828] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
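For orientation while reading these entries: the DDGST field that tcp.c:2241 is verifying is, per the NVMe/TCP transport specification, a CRC32C over the PDU's DATA section, and a mismatch surfaces to the host as the transient (retryable, dnr:0) transport error completions shown above. Below is a minimal, self-contained sketch of that digest check, assuming nothing from SPDK itself: a plain bitwise CRC32C rather than SPDK's optimized routines, and the payload and "received" digest are made-up values for illustration.

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only, not SPDK code: CRC32C (Castagnoli), the
 * algorithm NVMe/TCP uses for the data digest (DDGST). Reflected
 * polynomial 0x82F63B78, seed 0xFFFFFFFF, final complement. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;                 /* initial seed */
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                   /* final complement */
}

int main(void)
{
    const char payload[] = "123456789";         /* standard CRC check string */
    uint32_t computed = crc32c((const uint8_t *)payload, strlen(payload));
    /* Known check value: CRC32C("123456789") == 0xE3069283 */
    uint32_t received = 0xDEADBEEFu;            /* pretend the wire carried a corrupted digest */
    if (computed != received)
        printf("Data digest error: computed 0x%08" PRIX32 ", received 0x%08" PRIX32 "\n",
               computed, received);
    return 0;
}
```

Note that every completion in this run carries dnr:0 ("do not retry" clear), which is what marks the digest failure as retryable at the host; that is consistent with the test continuing to issue WRITEs on the same queue pair rather than tearing it down.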
[... the pattern continues between 09:55:25.228 and 09:55:25.806 (lba:7168, 6496, 12352, 14784, ..., 10048), with the command identifier moving from cid:0 to cid:1 at 09:55:25.343; each WRITE again completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:37:50.390 [2024-12-09 09:55:25.813656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8
00:37:50.390 [2024-12-09 09:55:25.813851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:37:50.390 [2024-12-09 09:55:25.813868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.817418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.817615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.817632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.821540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.821735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.821753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.825314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.825506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.825523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.828816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.829010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.829026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.833443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.833635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.833657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.390 [2024-12-09 09:55:25.837214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.390 [2024-12-09 09:55:25.837408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.390 [2024-12-09 09:55:25.837425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.652 [2024-12-09 09:55:25.841382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.652 [2024-12-09 09:55:25.841573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.652 [2024-12-09 09:55:25.841591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.652 [2024-12-09 09:55:25.847380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.652 [2024-12-09 09:55:25.847573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.652 [2024-12-09 09:55:25.847589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.652 [2024-12-09 09:55:25.851675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.652 [2024-12-09 09:55:25.851869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.652 [2024-12-09 09:55:25.851886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.652 [2024-12-09 09:55:25.855183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.652 [2024-12-09 09:55:25.855374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.652 [2024-12-09 09:55:25.855390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.858901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.859090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.859111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.862800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.862992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.863009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.870490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.870701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.870718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.877078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.877283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.877299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.886558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.886860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.886878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.895250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.895486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.895505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.903306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.903489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.903506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.907160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.907344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.907362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.912541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.912729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.912747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.916373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.916559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.916576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.920549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.920736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 
09:55:25.920753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.924351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.924524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.924541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.927725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.927895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.927912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.931714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.931884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.931901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.935261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.935432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.935449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.938777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.938949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.938966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.944452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.944624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.944646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.948078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.948250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:37:50.653 [2024-12-09 09:55:25.948267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.951565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.951741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.951758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.955431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.955600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.955616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.959234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.959407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.959424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.963938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.964236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.964255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.967787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.967979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.967996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.975852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.976110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.976128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.984450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.984642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.984659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.653 [2024-12-09 09:55:25.994940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.653 [2024-12-09 09:55:25.995192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.653 [2024-12-09 09:55:25.995210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.005709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.006007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.006029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.015181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.015477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.015496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.024972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.025229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.025250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.034839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.035087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.035104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.045558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.045825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.045846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.055861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.056202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.056220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.066763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.067068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.067086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.077501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.077904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.077922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.088331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.088537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.088555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.654 [2024-12-09 09:55:26.098646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.654 [2024-12-09 09:55:26.098960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.654 [2024-12-09 09:55:26.098982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.106023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.106368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.106386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.114916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.115251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.115269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.123124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.123294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.123311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.129827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.130000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.130017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.138275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.138450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.138467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.147535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.147902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.147920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.154960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.915 [2024-12-09 09:55:26.155131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.915 [2024-12-09 09:55:26.155148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.915 [2024-12-09 09:55:26.159044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.159213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.159230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.165114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.165283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.165302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.171774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.172075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.178667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.178839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.178856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.183024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.183217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.183234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.191782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.192064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.192082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.198386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.198557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.198574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.202552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.202731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.202749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.208247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.208618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.208636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.213067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.213237] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.213254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.217269] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.217439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.217456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.221498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.221674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.221691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.225287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.225458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.225475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:37:50.916 [2024-12-09 09:55:26.229304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf49110) with pdu=0x200016eff3c8 00:37:50.916 [2024-12-09 09:55:26.230650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:50.916 [2024-12-09 09:55:26.230669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:37:50.916 5223.50 IOPS, 652.94 MiB/s 00:37:50.916 Latency(us) 00:37:50.916 [2024-12-09T08:55:26.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.916 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:37:50.916 nvme0n1 : 2.01 5217.18 652.15 0.00 0.00 3061.24 1433.60 14090.24 00:37:50.916 [2024-12-09T08:55:26.369Z] =================================================================================================================== 00:37:50.916 [2024-12-09T08:55:26.369Z] Total : 5217.18 652.15 0.00 0.00 3061.24 1433.60 14090.24 00:37:50.916 { 00:37:50.916 "results": [ 00:37:50.916 { 00:37:50.916 "job": "nvme0n1", 00:37:50.916 "core_mask": "0x2", 00:37:50.916 "workload": "randwrite", 00:37:50.916 "status": "finished", 00:37:50.916 "queue_depth": 16, 00:37:50.916 "io_size": 131072, 00:37:50.916 "runtime": 2.005682, 00:37:50.916 "iops": 5217.177997309644, 00:37:50.916 "mibps": 652.1472496637055, 00:37:50.916 "io_failed": 0, 00:37:50.916 "io_timeout": 0, 00:37:50.916 "avg_latency_us": 3061.244933741081, 00:37:50.916 "min_latency_us": 1433.6, 00:37:50.916 "max_latency_us": 14090.24 00:37:50.916 } 00:37:50.916 ], 00:37:50.916 
"core_count": 1 00:37:50.916 } 00:37:50.916 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:37:50.916 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:37:50.916 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:37:50.916 | .driver_specific 00:37:50.916 | .nvme_error 00:37:50.916 | .status_code 00:37:50.916 | .command_transient_transport_error' 00:37:50.916 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 338 > 0 )) 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3053033 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3053033 ']' 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3053033 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053033 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053033' 00:37:51.177 killing process with pid 3053033 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3053033 00:37:51.177 Received shutdown signal, test time was about 2.000000 seconds 00:37:51.177 00:37:51.177 Latency(us) 00:37:51.177 [2024-12-09T08:55:26.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:51.177 [2024-12-09T08:55:26.630Z] =================================================================================================================== 00:37:51.177 [2024-12-09T08:55:26.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3053033 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3050995 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3050995 ']' 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3050995 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:51.177 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3050995 
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3050995'
00:37:51.439 killing process with pid 3050995
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3050995
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3050995
00:37:51.439
00:37:51.439 real 0m13.838s
00:37:51.439 user 0m26.926s
00:37:51.439 sys 0m3.430s
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:37:51.439 ************************************
00:37:51.439 END TEST nvmf_digest_error
00:37:51.439 ************************************
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:51.439 rmmod nvme_tcp
00:37:51.439 rmmod nvme_fabrics
00:37:51.439 rmmod nvme_keyring
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3050995 ']'
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3050995
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3050995 ']'
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3050995
00:37:51.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3050995) - No such process
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3050995 is not found'
00:37:51.439 Process with pid 3050995 is not found
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
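The common/autotest_common.sh@954-@981 entries above show killprocess taking both of its paths: pid 3053033 (the bperf client) was alive and got a SIGTERM, while pid 3050995 (the nvmf target, already gone by the time nvmftestfini retried it) made kill -0 fail with "No such process". A rough reconstruction of the helper from the xtrace output; the control flow is inferred, only the individual commands are confirmed by the trace:

killprocess() {
    [ -z "$1" ] && return 1                            # @954: require a pid
    if ! kill -0 "$1"; then                            # @958: probe for existence
        echo "Process with pid $1 is not found"        # @981: already exited
        return 0
    fi
    if [ "$(uname)" = Linux ]; then                    # @959
        process_name=$(ps --no-headers -o comm= "$1")  # @960: e.g. reactor_0
    fi
    [ "$process_name" = sudo ] && return 1             # @964: never signal a sudo wrapper
    echo "killing process with pid $1"                 # @972
    kill "$1"                                          # @973: default SIGTERM
    wait "$1"                                          # @978: reap and propagate exit status
}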
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:51.439 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:37:51.700 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:51.700 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:51.700 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:51.700 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:51.700 09:55:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:53.615 09:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:53.615
00:37:53.615 real 0m38.949s
00:37:53.615 user 0m59.221s
00:37:53.615 sys 0m12.603s
00:37:53.615 09:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:53.615 09:55:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:37:53.615 ************************************
00:37:53.615 END TEST nvmf_digest
00:37:53.615 ************************************
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:53.615 ************************************
00:37:53.615 START TEST nvmf_bdevperf
00:37:53.615 ************************************
00:37:53.615 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:37:53.878 * Looking for test storage...
00:37:53.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:53.878 --rc genhtml_branch_coverage=1
00:37:53.878 --rc genhtml_function_coverage=1
00:37:53.878 --rc genhtml_legend=1
00:37:53.878 --rc geninfo_all_blocks=1
00:37:53.878 --rc geninfo_unexecuted_blocks=1
00:37:53.878
00:37:53.878 '
00:37:53.878 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:53.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:53.878 --rc genhtml_branch_coverage=1
00:37:53.878 --rc genhtml_function_coverage=1
00:37:53.878 --rc genhtml_legend=1
00:37:53.878 --rc geninfo_all_blocks=1
00:37:53.879 --rc geninfo_unexecuted_blocks=1
00:37:53.879
00:37:53.879 '
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:37:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:53.879 --rc genhtml_branch_coverage=1
00:37:53.879 --rc genhtml_function_coverage=1
00:37:53.879 --rc genhtml_legend=1
00:37:53.879 --rc geninfo_all_blocks=1
00:37:53.879 --rc geninfo_unexecuted_blocks=1
00:37:53.879
00:37:53.879 '
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:37:53.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:53.879 --rc genhtml_branch_coverage=1
00:37:53.879 --rc genhtml_function_coverage=1
00:37:53.879 --rc genhtml_legend=1
00:37:53.879 --rc geninfo_all_blocks=1
00:37:53.879 --rc geninfo_unexecuted_blocks=1
00:37:53.879
00:37:53.879 '
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:37:53.879 09:55:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:02.022 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:02.022 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.022 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
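The trace above is nvmf/common.sh building a whitelist of NVMe-oF-capable NICs by PCI ID (Intel E810/X722 and several Mellanox ConnectX parts) and then resolving each matching PCI function to its kernel net device through sysfs. A minimal sketch of that lookup, using the two E810 ports (0x8086:0x159b) visible in the log; the real harness additionally filters on link state and RDMA-vs-TCP transport, as the [[ up == up ]] and [[ tcp == rdma ]] checks show, so the loop body here is a simplified assumption:

# sketch: resolve each whitelisted PCI function to its netdev name(s) via sysfs
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue        # skip functions with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
    done
done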
00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:02.023 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:02.023 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:02.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:02.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:38:02.023 00:38:02.023 --- 10.0.0.2 ping statistics --- 00:38:02.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.023 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:02.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:02.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.299 ms 00:38:02.023 00:38:02.023 --- 10.0.0.1 ping statistics --- 00:38:02.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.023 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3057780 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3057780 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3057780 ']' 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.023 09:55:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.023 [2024-12-09 09:55:36.716523] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
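nvmf_tcp_init above turns the two physical ports into a point-to-point NVMe/TCP testbed: the target port cvl_0_0 is moved into the network namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits port 4420, and connectivity is ping-verified in both directions. The same setup, condensed to the bare commands seen in the trace (harness wrappers such as ipts dropped):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator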
00:38:02.023 [2024-12-09 09:55:36.716573] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.023 [2024-12-09 09:55:36.808342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:02.023 [2024-12-09 09:55:36.826631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.023 [2024-12-09 09:55:36.826675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.023 [2024-12-09 09:55:36.826683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.023 [2024-12-09 09:55:36.826690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.023 [2024-12-09 09:55:36.826696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:02.023 [2024-12-09 09:55:36.828165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:02.023 [2024-12-09 09:55:36.828323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.023 [2024-12-09 09:55:36.828325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 [2024-12-09 09:55:37.563717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 Malloc0 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
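With nvmf_tgt up and listening on /var/tmp/spdk.sock, the rest of tgt_init is pure JSON-RPC; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py. Issued by hand, the bring-up shown here and in the next few trace lines (transport, backing bdev, subsystem, then namespace and listener) would look roughly as below; the flags are copied verbatim from the trace, and since the RPC socket is a filesystem Unix socket, no ip netns exec should be needed for these calls:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420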
00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:02.284 [2024-12-09 09:55:37.630710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.284 { 00:38:02.284 "params": { 00:38:02.284 "name": "Nvme$subsystem", 00:38:02.284 "trtype": "$TEST_TRANSPORT", 00:38:02.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.284 "adrfam": "ipv4", 00:38:02.284 "trsvcid": "$NVMF_PORT", 00:38:02.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.284 "hdgst": ${hdgst:-false}, 00:38:02.284 "ddgst": ${ddgst:-false} 00:38:02.284 }, 00:38:02.284 "method": "bdev_nvme_attach_controller" 00:38:02.284 } 00:38:02.284 EOF 00:38:02.284 )") 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:02.284 09:55:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.284 "params": { 00:38:02.284 "name": "Nvme1", 00:38:02.284 "trtype": "tcp", 00:38:02.284 "traddr": "10.0.0.2", 00:38:02.284 "adrfam": "ipv4", 00:38:02.284 "trsvcid": "4420", 00:38:02.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:02.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:02.284 "hdgst": false, 00:38:02.284 "ddgst": false 00:38:02.284 }, 00:38:02.284 "method": "bdev_nvme_attach_controller" 00:38:02.284 }' 00:38:02.284 [2024-12-09 09:55:37.686896] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:38:02.284 [2024-12-09 09:55:37.686947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058080 ] 00:38:02.544 [2024-12-09 09:55:37.775119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.544 [2024-12-09 09:55:37.793240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.544 Running I/O for 1 seconds... 00:38:03.928 8923.00 IOPS, 34.86 MiB/s 00:38:03.928 Latency(us) 00:38:03.928 [2024-12-09T08:55:39.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:03.928 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:03.928 Verification LBA range: start 0x0 length 0x4000 00:38:03.928 Nvme1n1 : 1.05 8633.04 33.72 0.00 0.00 14306.82 3208.53 45001.39 00:38:03.928 [2024-12-09T08:55:39.381Z] =================================================================================================================== 00:38:03.928 [2024-12-09T08:55:39.381Z] Total : 8633.04 33.72 0.00 0.00 14306.82 3208.53 45001.39 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3058412 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:03.928 { 00:38:03.928 "params": { 00:38:03.928 "name": "Nvme$subsystem", 00:38:03.928 "trtype": "$TEST_TRANSPORT", 00:38:03.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:03.928 "adrfam": "ipv4", 00:38:03.928 "trsvcid": "$NVMF_PORT", 00:38:03.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:03.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:03.928 "hdgst": ${hdgst:-false}, 00:38:03.928 "ddgst": ${ddgst:-false} 00:38:03.928 }, 00:38:03.928 "method": "bdev_nvme_attach_controller" 00:38:03.928 } 00:38:03.928 EOF 00:38:03.928 )") 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
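Worth noting how bdevperf receives its configuration: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per target subsystem, and the result is handed over on an anonymous descriptor (/dev/fd/62 and /dev/fd/63 in this trace) via process substitution, so no config file touches disk. A stand-alone sketch of the pattern follows; the inner params/method object is copied from the config printed in the trace, while the gen_json name and the outer "subsystems" wrapper (SPDK's usual JSON-config shape) are illustrative assumptions:

gen_json() {
cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}
# bdevperf sees the generated JSON as /dev/fd/NN thanks to <(...)
./build/examples/bdevperf --json <(gen_json) -q 128 -o 4096 -w verify -t 15 -f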
00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:38:03.928 09:55:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:03.928 "params": { 00:38:03.928 "name": "Nvme1", 00:38:03.928 "trtype": "tcp", 00:38:03.928 "traddr": "10.0.0.2", 00:38:03.928 "adrfam": "ipv4", 00:38:03.928 "trsvcid": "4420", 00:38:03.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:03.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:03.928 "hdgst": false, 00:38:03.928 "ddgst": false 00:38:03.928 }, 00:38:03.928 "method": "bdev_nvme_attach_controller" 00:38:03.928 }' 00:38:03.928 [2024-12-09 09:55:39.152830] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:03.928 [2024-12-09 09:55:39.152886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058412 ] 00:38:03.928 [2024-12-09 09:55:39.242388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.928 [2024-12-09 09:55:39.259528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.188 Running I/O for 15 seconds... 00:38:06.070 11247.00 IOPS, 43.93 MiB/s [2024-12-09T08:55:42.469Z] 11280.50 IOPS, 44.06 MiB/s [2024-12-09T08:55:42.469Z] 09:55:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3057780 00:38:07.016 09:55:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:38:07.016 [2024-12-09 09:55:42.106998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 09:55:42.107043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.016 [2024-12-09 09:55:42.107063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 09:55:42.107075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.016 [2024-12-09 09:55:42.107086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:108248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 09:55:42.107095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.016 [2024-12-09 09:55:42.107107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 09:55:42.107115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.016 [2024-12-09 09:55:42.107126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 09:55:42.107133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.016 [2024-12-09 09:55:42.107145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.016 [2024-12-09 
09:55:42.107152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the kill -9 above took the target down mid-run, so nvme_qpair print_command / print_completion notice pairs repeat for every remaining in-flight READ/WRITE on qid:1 (lba 108280 through 109248 and onward), each completing with the same ABORTED - SQ DELETION (00/08) status ...] 00:38:07.019 [2024-12-09 09:55:42.108869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108656 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.108992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.108999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:38:07.019 [2024-12-09 09:55:42.109049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:108760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.019 [2024-12-09 09:55:42.109119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.019 [2024-12-09 09:55:42.109129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 
09:55:42.109221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:108864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:07.020 [2024-12-09 09:55:42.109387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:07.020 [2024-12-09 09:55:42.109394] nvme_qpair.c: 
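The "(00/08)" pair printed with every completion above is the NVMe status code type and status code: SCT 0x00 (generic command status) with SC 0x08, which the NVMe spec defines as "Command Aborted due to SQ Deletion" -- exactly what is expected when the I/O submission queue is torn down under a controller reset. A minimal sketch of how those two fields sit in the 16-bit completion status (CQE Dword 3, bits 31:16, per the spec bit layout; this is illustrative, not SPDK source):

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit NVMe completion status word. Bit 0 is the phase
     * tag; SC occupies bits 8:1 and SCT bits 11:9, which is the "(SCT/SC)"
     * pair the log prints, e.g. (00/08). */
    static void decode_status(uint16_t st)
    {
        unsigned sc  = (st >> 1) & 0xff;   /* status code */
        unsigned sct = (st >> 9) & 0x7;    /* status code type: 0 = generic */
        unsigned dnr = (st >> 15) & 0x1;   /* "do not retry" bit */
        printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
               (sct == 0 && sc == 0x08) ? "  ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        decode_status(0x08 << 1);   /* the completions above: (00/08), dnr:0 */
        return 0;
    }

dnr:0 in the same entries means the "do not retry" bit is clear, so the host is permitted to requeue these commands once the controller comes back.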
00:38:07.020 [2024-12-09 09:55:42.109402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa829a0 is same with the state(6) to be set
00:38:07.020 [2024-12-09 09:55:42.109412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:38:07.020 [2024-12-09 09:55:42.109418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:38:07.020 [2024-12-09 09:55:42.109425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108904 len:8 PRP1 0x0 PRP2 0x0
00:38:07.020 [2024-12-09 09:55:42.109433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:07.020 [2024-12-09 09:55:42.109510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:07.020 [2024-12-09 09:55:42.109522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:07.020 [2024-12-09 09:55:42.109530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:07.020 [2024-12-09 09:55:42.109539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:07.020 [2024-12-09 09:55:42.109548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:07.020 [2024-12-09 09:55:42.109555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:07.020 [2024-12-09 09:55:42.109563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:07.020 [2024-12-09 09:55:42.109570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:07.020 [2024-12-09 09:55:42.109577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.020 [2024-12-09 09:55:42.113183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.020 [2024-12-09 09:55:42.113206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.020 [2024-12-09 09:55:42.114121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.020 [2024-12-09 09:55:42.114142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.020 [2024-12-09 09:55:42.114151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.020 [2024-12-09 09:55:42.114448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.020 [2024-12-09 09:55:42.114753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.020 [2024-12-09 09:55:42.114763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.020 [2024-12-09 09:55:42.114771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.020 [2024-12-09 09:55:42.114780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
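errno = 111 is ECONNREFUSED on Linux: nothing is accepting TCP connections at 10.0.0.2:4420 at this point in the test, so every reconnect attempt dies inside posix_sock_create() before the NVMe/TCP handshake can start. A self-contained sketch that reproduces the same failure mode (address and port taken from the log entries above; run it with no listener on that port):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* The target address/port from the log: 10.0.0.2:4420. */
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port   = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* With no listener, Linux reports ECONNREFUSED, errno 111,
             * matching "connect() failed, errno = 111" above. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }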
[... the same reconnect cycle -- "resetting controller", "connect() failed, errno = 111", "sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420", "Ctrlr is in error state", "controller reinitialization failed", "Resetting controller failed." -- repeats with identical errors roughly every 15 ms, from 09:55:42.128740 through 09:55:42.452320; the duplicate entries are elided ...]
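Each failed reset above is retried on a roughly 15 ms cadence. SPDK paces these retries itself (bdev_nvme exposes knobs such as reconnect_delay_sec and ctrlr_loss_timeout_sec), so the loop below is only a generic sketch of bounded retry with exponential backoff, not SPDK's implementation; try_connect() is a hypothetical stand-in for the transport connect step that keeps failing in the log:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the transport connect step; in the log
     * this role is played by nvme_tcp_qpair_connect_sock(). */
    static bool try_connect(void) { return false; /* port still closed */ }

    int main(void)
    {
        const int max_attempts = 8;
        useconds_t delay_us = 15000;   /* ~15 ms, the cadence seen above */

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            if (try_connect()) {
                printf("reconnected on attempt %d\n", attempt);
                return 0;
            }
            fprintf(stderr, "attempt %d failed; retrying in %u us\n",
                    attempt, (unsigned)delay_us);
            usleep(delay_us);
            delay_us *= 2;   /* back off instead of hammering the target */
        }
        fprintf(stderr, "giving up after %d attempts\n", max_attempts);
        return 1;
    }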
00:38:07.286 9971.00 IOPS, 38.95 MiB/s [2024-12-09T08:55:42.739Z] [2024-12-09 09:55:42.466272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.286 [2024-12-09 09:55:42.467033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.286 [2024-12-09 09:55:42.467076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:07.286 [2024-12-09 09:55:42.467088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:07.286 [2024-12-09 09:55:42.467406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:07.286 [2024-12-09 09:55:42.467716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.286 [2024-12-09 09:55:42.467727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.286 [2024-12-09 09:55:42.467736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.286 [2024-12-09 09:55:42.467744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:07.286 [2024-12-09 09:55:42.481430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.286 [2024-12-09 09:55:42.482064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.286 [2024-12-09 09:55:42.482086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:07.286 [2024-12-09 09:55:42.482095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:07.286 [2024-12-09 09:55:42.482391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:07.286 [2024-12-09 09:55:42.482694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.286 [2024-12-09 09:55:42.482705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.286 [2024-12-09 09:55:42.482712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.286 [2024-12-09 09:55:42.482719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.286 [2024-12-09 09:55:42.496687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.286 [2024-12-09 09:55:42.497332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.286 [2024-12-09 09:55:42.497352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:07.286 [2024-12-09 09:55:42.497360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:07.286 [2024-12-09 09:55:42.497662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:07.286 [2024-12-09 09:55:42.497960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.286 [2024-12-09 09:55:42.497970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.286 [2024-12-09 09:55:42.497978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.286 [2024-12-09 09:55:42.497985] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:07.286 [2024-12-09 09:55:42.511960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:07.286 [2024-12-09 09:55:42.512607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:07.286 [2024-12-09 09:55:42.512633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:07.286 [2024-12-09 09:55:42.512649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:07.286 [2024-12-09 09:55:42.512945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:07.286 [2024-12-09 09:55:42.513242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:07.286 [2024-12-09 09:55:42.513252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:07.286 [2024-12-09 09:55:42.513260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:07.286 [2024-12-09 09:55:42.513267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:07.286 [2024-12-09 09:55:42.527238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.286 [2024-12-09 09:55:42.527948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.286 [2024-12-09 09:55:42.527998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.286 [2024-12-09 09:55:42.528012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.286 [2024-12-09 09:55:42.528336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.286 [2024-12-09 09:55:42.528651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.286 [2024-12-09 09:55:42.528662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.286 [2024-12-09 09:55:42.528671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.286 [2024-12-09 09:55:42.528680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.286 [2024-12-09 09:55:42.542681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.286 [2024-12-09 09:55:42.543317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.286 [2024-12-09 09:55:42.543343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.286 [2024-12-09 09:55:42.543351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.286 [2024-12-09 09:55:42.543659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.286 [2024-12-09 09:55:42.543960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.286 [2024-12-09 09:55:42.543971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.286 [2024-12-09 09:55:42.543979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.286 [2024-12-09 09:55:42.543987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.286 [2024-12-09 09:55:42.558156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.286 [2024-12-09 09:55:42.558797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.286 [2024-12-09 09:55:42.558849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.286 [2024-12-09 09:55:42.558863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.286 [2024-12-09 09:55:42.559190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.286 [2024-12-09 09:55:42.559502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.286 [2024-12-09 09:55:42.559513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.286 [2024-12-09 09:55:42.559522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.286 [2024-12-09 09:55:42.559531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.286 [2024-12-09 09:55:42.573537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.286 [2024-12-09 09:55:42.574306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.286 [2024-12-09 09:55:42.574367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.286 [2024-12-09 09:55:42.574380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.286 [2024-12-09 09:55:42.574726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.286 [2024-12-09 09:55:42.575033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.286 [2024-12-09 09:55:42.575045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.286 [2024-12-09 09:55:42.575054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.286 [2024-12-09 09:55:42.575063] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.286 [2024-12-09 09:55:42.588804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.286 [2024-12-09 09:55:42.589473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.286 [2024-12-09 09:55:42.589504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.286 [2024-12-09 09:55:42.589513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.286 [2024-12-09 09:55:42.589824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.286 [2024-12-09 09:55:42.590125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.286 [2024-12-09 09:55:42.590138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.286 [2024-12-09 09:55:42.590146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.286 [2024-12-09 09:55:42.590155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.604182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.604858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.604886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.604894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.605193] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.605492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.605504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.605520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.605528] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.619545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.620192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.620220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.620229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.620527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.620836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.620850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.620860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.620869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.634898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.635552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.635614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.635628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.635978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.636286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.636299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.636308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.636319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.650078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.650755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.650820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.650833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.651167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.651473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.651487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.651496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.651506] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.665281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.666060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.666125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.666138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.666486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.666808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.666822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.666832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.666841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.680648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.681464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.681529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.681543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.681892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.682201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.682213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.682223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.682233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.695981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.696754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.696820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.696833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.697166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.697473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.697487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.697500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.697513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.711220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.711982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.712048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.712068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.712403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.712721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.712735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.712744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.712754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.287 [2024-12-09 09:55:42.726455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.287 [2024-12-09 09:55:42.727169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.287 [2024-12-09 09:55:42.727200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.287 [2024-12-09 09:55:42.727210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.287 [2024-12-09 09:55:42.727509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.287 [2024-12-09 09:55:42.727822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.287 [2024-12-09 09:55:42.727835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.287 [2024-12-09 09:55:42.727844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.287 [2024-12-09 09:55:42.727852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.549 [2024-12-09 09:55:42.741872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.549 [2024-12-09 09:55:42.742607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.549 [2024-12-09 09:55:42.742685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.549 [2024-12-09 09:55:42.742699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.549 [2024-12-09 09:55:42.743033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.549 [2024-12-09 09:55:42.743341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.549 [2024-12-09 09:55:42.743356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.549 [2024-12-09 09:55:42.743365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.549 [2024-12-09 09:55:42.743378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.549 [2024-12-09 09:55:42.757128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.549 [2024-12-09 09:55:42.757841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.549 [2024-12-09 09:55:42.757874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.549 [2024-12-09 09:55:42.757884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.549 [2024-12-09 09:55:42.758185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.549 [2024-12-09 09:55:42.758496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.549 [2024-12-09 09:55:42.758508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.549 [2024-12-09 09:55:42.758516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.549 [2024-12-09 09:55:42.758524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.549 [2024-12-09 09:55:42.772574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.549 [2024-12-09 09:55:42.773266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.549 [2024-12-09 09:55:42.773294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.549 [2024-12-09 09:55:42.773303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.549 [2024-12-09 09:55:42.773602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.549 [2024-12-09 09:55:42.773912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.549 [2024-12-09 09:55:42.773925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.549 [2024-12-09 09:55:42.773933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.549 [2024-12-09 09:55:42.773941] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.549 [2024-12-09 09:55:42.787932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.549 [2024-12-09 09:55:42.788691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.549 [2024-12-09 09:55:42.788757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.549 [2024-12-09 09:55:42.788771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.549 [2024-12-09 09:55:42.789107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.549 [2024-12-09 09:55:42.789414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.549 [2024-12-09 09:55:42.789426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.549 [2024-12-09 09:55:42.789435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.549 [2024-12-09 09:55:42.789445] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.549 [2024-12-09 09:55:42.803181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.549 [2024-12-09 09:55:42.803976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.549 [2024-12-09 09:55:42.804041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.549 [2024-12-09 09:55:42.804054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.549 [2024-12-09 09:55:42.804388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.549 [2024-12-09 09:55:42.804708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.549 [2024-12-09 09:55:42.804721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.549 [2024-12-09 09:55:42.804738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.549 [2024-12-09 09:55:42.804748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.818470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.819239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.819305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.819319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.819672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.819979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.819992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.820002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.820013] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.833735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.834490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.834557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.834570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.834923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.835232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.835244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.835254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.835265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.848969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.849543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.849575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.849585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.849897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.850199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.850211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.850220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.850228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.864263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.865027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.865092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.865105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.865440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.865760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.865773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.865782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.865792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.879540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.880339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.880403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.880416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.880763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.881071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.881084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.881094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.881104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.894897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.895594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.895626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.895647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.895949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.896249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.896262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.896271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.896280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.910271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.911011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.911075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.911096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.911430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.911751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.911765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.911774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.911783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.925490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.926156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.926188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.926198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.926497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.926809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.926821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.926830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.926838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.940793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.941446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.941507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.941520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.941870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.942177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.942190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.550 [2024-12-09 09:55:42.942199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.550 [2024-12-09 09:55:42.942209] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.550 [2024-12-09 09:55:42.956193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.550 [2024-12-09 09:55:42.956964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.550 [2024-12-09 09:55:42.957029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.550 [2024-12-09 09:55:42.957042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.550 [2024-12-09 09:55:42.957376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.550 [2024-12-09 09:55:42.957707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.550 [2024-12-09 09:55:42.957720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.551 [2024-12-09 09:55:42.957729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.551 [2024-12-09 09:55:42.957739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.551 [2024-12-09 09:55:42.971487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.551 [2024-12-09 09:55:42.972284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.551 [2024-12-09 09:55:42.972348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.551 [2024-12-09 09:55:42.972362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.551 [2024-12-09 09:55:42.972712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.551 [2024-12-09 09:55:42.973020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.551 [2024-12-09 09:55:42.973032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.551 [2024-12-09 09:55:42.973041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.551 [2024-12-09 09:55:42.973052] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.551 [2024-12-09 09:55:42.986742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.551 [2024-12-09 09:55:42.987316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.551 [2024-12-09 09:55:42.987346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.551 [2024-12-09 09:55:42.987357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.551 [2024-12-09 09:55:42.987669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.551 [2024-12-09 09:55:42.987972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.551 [2024-12-09 09:55:42.987984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.551 [2024-12-09 09:55:42.987993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.551 [2024-12-09 09:55:42.988001] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.813 [2024-12-09 09:55:43.001993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.813 [2024-12-09 09:55:43.002692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.813 [2024-12-09 09:55:43.002720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.002730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.003030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.003329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.003341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.003357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.003365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.017347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.018086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.018151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.018164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.018498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.018825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.018841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.018850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.018860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.032571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.033238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.033269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.033279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.033579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.033891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.033904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.033912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.033921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.047875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.048631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.048709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.048722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.049056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.049363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.049374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.049383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.049393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.063120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.063770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.063817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.063827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.064147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.064451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.064463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.064472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.064481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.078495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.079247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.079312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.079325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.079676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.079984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.079997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.080006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.080016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.093777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.094548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.094612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.094625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.094973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.095281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.095293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.095303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.095313] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.109009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.109762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.109828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.109849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.110183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.110490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.110504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.110513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.110524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.124232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.124936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.125001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.125014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.125348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.125671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.125684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.125693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.814 [2024-12-09 09:55:43.125703] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.814 [2024-12-09 09:55:43.139550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.814 [2024-12-09 09:55:43.140357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.814 [2024-12-09 09:55:43.140423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.814 [2024-12-09 09:55:43.140437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.814 [2024-12-09 09:55:43.140790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.814 [2024-12-09 09:55:43.141098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.814 [2024-12-09 09:55:43.141113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.814 [2024-12-09 09:55:43.141123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.141134] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.154876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.155580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.155612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.155622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.155934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.156251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.156263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.156271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.156279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.170314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.171093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.171158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.171171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.171505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.171829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.171842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.171852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.171862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.185546] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.186298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.186363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.186376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.186726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.187033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.187046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.187055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.187066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.200770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.201471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.201502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.201512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.201826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.202127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.202138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.202147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.202162] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.214410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.214966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.214991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.214998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.215204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.215412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.215421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.215427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.215433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.228140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.228798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.228848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.228857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.229090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.229301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.229314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.229321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.229328] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.241842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.242387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.242431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.242441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.242684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.242896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.242905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.242911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.242918] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:07.815 [2024-12-09 09:55:43.255420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:07.815 [2024-12-09 09:55:43.256076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:07.815 [2024-12-09 09:55:43.256121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:07.815 [2024-12-09 09:55:43.256130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:07.815 [2024-12-09 09:55:43.256359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:07.815 [2024-12-09 09:55:43.256569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:07.815 [2024-12-09 09:55:43.256579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:07.815 [2024-12-09 09:55:43.256585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:07.815 [2024-12-09 09:55:43.256593] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.269170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.269773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.269814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.269823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.270049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.270260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.270271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.270277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.270286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.282796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.283344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.283363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.283369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.283573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.283786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.283794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.283800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.283806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.296479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.297042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.297057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.297063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.297270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.297474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.297481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.297487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.297493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.310200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.310844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.310879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.310888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.311109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.311317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.311327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.311333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.311339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.323837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.324382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.324414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.324423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.324656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.324868] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.324875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.324881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.324888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.337373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.337990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.338022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.338031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.338251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.338458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.338470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.338476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.338482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.350968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.351594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.351625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.351634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.351861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.352069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.352078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.352084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.352090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.364579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.365235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.365267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.078 [2024-12-09 09:55:43.365276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.078 [2024-12-09 09:55:43.365494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.078 [2024-12-09 09:55:43.365710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.078 [2024-12-09 09:55:43.365719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.078 [2024-12-09 09:55:43.365725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.078 [2024-12-09 09:55:43.365731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.078 [2024-12-09 09:55:43.378217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.078 [2024-12-09 09:55:43.378834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.078 [2024-12-09 09:55:43.378866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.378875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.379095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.379302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.379310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.379316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.379325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.391809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.392421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.392454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.392462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.392692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.392900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.392909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.392915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.392921] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.405372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.405996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.406028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.406036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.406256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.406464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.406471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.406477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.406483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.418982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.419636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.419674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.419682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.419902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.420109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.420116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.420122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.420128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.432629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.433286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.433317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.433326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.433545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.433762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.433772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.433778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.433784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.446289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.446817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.446833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.446839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.447043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.447246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.447255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.447260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.447266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 7478.25 IOPS, 29.21 MiB/s [2024-12-09T08:55:43.532Z] [2024-12-09 09:55:43.461102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.461614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.461628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.461635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.461845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.462049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.462057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.462063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.462068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.474764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.475364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.475396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.475404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.475627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.475842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.475852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.475857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.475863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.488363] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.488957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.488989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.488998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.489218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.489425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.489432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.079 [2024-12-09 09:55:43.489438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.079 [2024-12-09 09:55:43.489444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.079 [2024-12-09 09:55:43.501950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.079 [2024-12-09 09:55:43.502561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.079 [2024-12-09 09:55:43.502593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.079 [2024-12-09 09:55:43.502602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.079 [2024-12-09 09:55:43.502832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.079 [2024-12-09 09:55:43.503040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.079 [2024-12-09 09:55:43.503047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.080 [2024-12-09 09:55:43.503053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.080 [2024-12-09 09:55:43.503060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.080 [2024-12-09 09:55:43.515581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.080 [2024-12-09 09:55:43.516245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.080 [2024-12-09 09:55:43.516277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.080 [2024-12-09 09:55:43.516286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.080 [2024-12-09 09:55:43.516505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.080 [2024-12-09 09:55:43.516721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.080 [2024-12-09 09:55:43.516734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.080 [2024-12-09 09:55:43.516740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.080 [2024-12-09 09:55:43.516746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.529234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.529904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.529936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.529944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.530164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.530371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.530380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.530386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.530392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.542885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.543513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.543545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.543554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.543782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.543990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.543999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.544005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.544011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.556498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.557149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.557181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.557189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.557408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.557616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.557624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.557630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.557648] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.570148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.570735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.570766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.570775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.570997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.571204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.571213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.571219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.571226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.583804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.584464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.584495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.584504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.584732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.584940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.584948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.584954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.584960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.597438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.598023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.598055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.598063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.598282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.598490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.598498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.598503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.598509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.610993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.611648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.611679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.611688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.611907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.612114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.612121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.612127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.612132] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.624620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.625271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.625303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.625312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.625531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.625747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.625755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.625762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.344 [2024-12-09 09:55:43.625767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.344 [2024-12-09 09:55:43.638246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.344 [2024-12-09 09:55:43.638779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.344 [2024-12-09 09:55:43.638796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.344 [2024-12-09 09:55:43.638802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.344 [2024-12-09 09:55:43.639005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.344 [2024-12-09 09:55:43.639208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.344 [2024-12-09 09:55:43.639216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.344 [2024-12-09 09:55:43.639222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.639227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.651933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.652550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.652581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.652590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.652822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.653030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.653038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.653043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.653049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.665543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.666150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.666182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.666191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.666409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.666617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.666624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.666630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.666636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.679133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.679645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.679662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.679668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.679871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.680074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.680083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.680088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.680093] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.692759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.693353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.693385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.693394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.693613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.693827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.693839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.693846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.693851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.706339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.707405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.707433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.707441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.707668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.707876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.707883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.707889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.707895] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.720031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.720621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.720659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.720667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.720887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.721094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.721102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.721108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.721114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.733600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:08.345 [2024-12-09 09:55:43.734239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:08.345 [2024-12-09 09:55:43.734271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:08.345 [2024-12-09 09:55:43.734280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:08.345 [2024-12-09 09:55:43.734499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:08.345 [2024-12-09 09:55:43.734713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:08.345 [2024-12-09 09:55:43.734722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:08.345 [2024-12-09 09:55:43.734728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:08.345 [2024-12-09 09:55:43.734738] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:08.345 [2024-12-09 09:55:43.747231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.345 [2024-12-09 09:55:43.747756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.345 [2024-12-09 09:55:43.747788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.345 [2024-12-09 09:55:43.747797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.345 [2024-12-09 09:55:43.748018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.345 [2024-12-09 09:55:43.748226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.345 [2024-12-09 09:55:43.748235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.345 [2024-12-09 09:55:43.748241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.345 [2024-12-09 09:55:43.748248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.345 [2024-12-09 09:55:43.760951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.345 [2024-12-09 09:55:43.761506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.345 [2024-12-09 09:55:43.761522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.345 [2024-12-09 09:55:43.761529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.345 [2024-12-09 09:55:43.761737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.345 [2024-12-09 09:55:43.761941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.345 [2024-12-09 09:55:43.761949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.345 [2024-12-09 09:55:43.761955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.345 [2024-12-09 09:55:43.761960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.345 [2024-12-09 09:55:43.774648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.345 [2024-12-09 09:55:43.775152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.345 [2024-12-09 09:55:43.775166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.345 [2024-12-09 09:55:43.775172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.345 [2024-12-09 09:55:43.775375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.345 [2024-12-09 09:55:43.775579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.346 [2024-12-09 09:55:43.775588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.346 [2024-12-09 09:55:43.775593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.346 [2024-12-09 09:55:43.775598] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.346 [2024-12-09 09:55:43.788283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.346 [2024-12-09 09:55:43.788952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.346 [2024-12-09 09:55:43.788987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.346 [2024-12-09 09:55:43.788996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.346 [2024-12-09 09:55:43.789215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.346 [2024-12-09 09:55:43.789422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.346 [2024-12-09 09:55:43.789430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.346 [2024-12-09 09:55:43.789436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.346 [2024-12-09 09:55:43.789442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.610 [2024-12-09 09:55:43.801986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.802504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.802520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.802526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.802733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.802938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.802946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.802951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.802956] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.610 [2024-12-09 09:55:43.815627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.816170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.816183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.816189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.816392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.816595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.816603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.816609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.816614] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.610 [2024-12-09 09:55:43.829293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.829838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.829852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.829857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.830063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.830267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.830274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.830279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.830287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.610 [2024-12-09 09:55:43.842965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.843604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.843636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.843651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.843872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.844080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.844087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.844093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.844099] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.610 [2024-12-09 09:55:43.856592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.857230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.857262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.857271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.857490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.857703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.857713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.857718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.857724] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.610 [2024-12-09 09:55:43.870233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.870784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.870800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.870806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.871009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.871213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.871225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.871231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.871236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.610 [2024-12-09 09:55:43.883920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.884557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.884589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.884599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.884824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.885032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.885041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.885047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.885053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.610 [2024-12-09 09:55:43.897540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.898098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.898114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.898120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.898324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.898527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.898536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.610 [2024-12-09 09:55:43.898541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.610 [2024-12-09 09:55:43.898546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.610 [2024-12-09 09:55:43.911263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.610 [2024-12-09 09:55:43.911677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.610 [2024-12-09 09:55:43.911691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.610 [2024-12-09 09:55:43.911697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.610 [2024-12-09 09:55:43.911900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.610 [2024-12-09 09:55:43.912104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.610 [2024-12-09 09:55:43.912111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.912116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.912122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.611 [2024-12-09 09:55:43.924831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.925475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.925506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.925516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.925742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.925949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.925957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.925963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.925969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.611 [2024-12-09 09:55:43.938456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.939172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.939204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.939213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.939433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.939647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.939656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.939661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.939667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.611 [2024-12-09 09:55:43.952147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.952689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.952711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.952718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.952927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.953133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.953140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.953146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.953151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.611 [2024-12-09 09:55:43.965851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.966357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.966375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.966381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.966584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.966795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.966803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.966809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.966814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.611 [2024-12-09 09:55:43.979497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.980012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.980026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.980032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.980235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.980438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.980447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.980452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.980457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.611 [2024-12-09 09:55:43.993132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:43.993759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:43.993792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:43.993801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:43.994022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:43.994229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:43.994237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:43.994243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:43.994249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.611 [2024-12-09 09:55:44.006751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:44.007277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:44.007292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:44.007298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:44.007506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:44.007719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:44.007726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:44.007732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:44.007737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.611 [2024-12-09 09:55:44.020409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:44.020928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:44.020945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:44.020951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:44.021153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:44.021357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:44.021364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.611 [2024-12-09 09:55:44.021369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.611 [2024-12-09 09:55:44.021375] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.611 [2024-12-09 09:55:44.034052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.611 [2024-12-09 09:55:44.034697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.611 [2024-12-09 09:55:44.034728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.611 [2024-12-09 09:55:44.034737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.611 [2024-12-09 09:55:44.034959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.611 [2024-12-09 09:55:44.035166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.611 [2024-12-09 09:55:44.035174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.612 [2024-12-09 09:55:44.035181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.612 [2024-12-09 09:55:44.035187] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.612 [2024-12-09 09:55:44.047688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.612 [2024-12-09 09:55:44.048332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.612 [2024-12-09 09:55:44.048364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.612 [2024-12-09 09:55:44.048374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.612 [2024-12-09 09:55:44.048593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.612 [2024-12-09 09:55:44.048807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.612 [2024-12-09 09:55:44.048816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.612 [2024-12-09 09:55:44.048826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.612 [2024-12-09 09:55:44.048832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.875 [2024-12-09 09:55:44.061329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.061855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.061872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.061878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.062082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.062285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.062292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.062298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.062303] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.875 [2024-12-09 09:55:44.074991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.075497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.075511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.075517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.075724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.075928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.075937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.075942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.075947] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.875 [2024-12-09 09:55:44.088616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.089261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.089293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.089302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.089521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.089734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.089743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.089749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.089755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.875 [2024-12-09 09:55:44.102244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.102936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.102967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.102976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.103195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.103403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.103410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.103416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.103422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.875 [2024-12-09 09:55:44.115914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.116433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.116448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.116454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.116662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.116867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.116874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.116880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.116885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.875 [2024-12-09 09:55:44.129559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.130029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.130043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.130049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.130251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.130455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.130463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.130468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.130473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.875 [2024-12-09 09:55:44.143174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.143762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.143796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.143808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.144029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.144236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.144245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.144250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.144256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.875 [2024-12-09 09:55:44.156753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.875 [2024-12-09 09:55:44.157408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.875 [2024-12-09 09:55:44.157439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.875 [2024-12-09 09:55:44.157448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.875 [2024-12-09 09:55:44.157674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.875 [2024-12-09 09:55:44.157882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.875 [2024-12-09 09:55:44.157890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.875 [2024-12-09 09:55:44.157896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.875 [2024-12-09 09:55:44.157902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.876 [2024-12-09 09:55:44.170489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.171140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.171171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.171181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.171411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.171618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.171628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.171634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.171646] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.876 [2024-12-09 09:55:44.184132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.184668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.184685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.184691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.184894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.185102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.185110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.185115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.185121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.876 [2024-12-09 09:55:44.197801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.198470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.198502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.198511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.198736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.198944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.198952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.198958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.198964] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.876 [2024-12-09 09:55:44.211446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.212036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.212052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.212058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.212261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.212465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.212473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.212478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.212484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.876 [2024-12-09 09:55:44.224980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.225533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.225547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.225552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.225761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.225965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.225972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.225981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.225987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.876 [2024-12-09 09:55:44.238657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.239278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.239310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.239319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.239539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.239756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.239765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.239770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.239776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.876 [2024-12-09 09:55:44.252265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.252831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.252847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.252853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.253057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.253260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.253268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.253274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.253279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.876 [2024-12-09 09:55:44.265963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.266495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.266508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.266514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.266721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.266925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.266933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.266938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.266944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.876 [2024-12-09 09:55:44.279644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.280161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.280174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.280180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.280382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.280585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.280593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.876 [2024-12-09 09:55:44.280598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.876 [2024-12-09 09:55:44.280603] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.876 [2024-12-09 09:55:44.293381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.876 [2024-12-09 09:55:44.293872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.876 [2024-12-09 09:55:44.293886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.876 [2024-12-09 09:55:44.293892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.876 [2024-12-09 09:55:44.294095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.876 [2024-12-09 09:55:44.294299] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.876 [2024-12-09 09:55:44.294307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.877 [2024-12-09 09:55:44.294312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.877 [2024-12-09 09:55:44.294317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:08.877 [2024-12-09 09:55:44.307022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.877 [2024-12-09 09:55:44.307537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.877 [2024-12-09 09:55:44.307550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.877 [2024-12-09 09:55:44.307555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.877 [2024-12-09 09:55:44.307761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.877 [2024-12-09 09:55:44.307964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.877 [2024-12-09 09:55:44.307972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.877 [2024-12-09 09:55:44.307978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.877 [2024-12-09 09:55:44.307983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:08.877 [2024-12-09 09:55:44.320663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:08.877 [2024-12-09 09:55:44.321200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:08.877 [2024-12-09 09:55:44.321213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:08.877 [2024-12-09 09:55:44.321222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:08.877 [2024-12-09 09:55:44.321424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:08.877 [2024-12-09 09:55:44.321628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:08.877 [2024-12-09 09:55:44.321635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:08.877 [2024-12-09 09:55:44.321646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:08.877 [2024-12-09 09:55:44.321651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.139 [2024-12-09 09:55:44.334329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.139 [2024-12-09 09:55:44.334798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.139 [2024-12-09 09:55:44.334830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.139 [2024-12-09 09:55:44.334839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.139 [2024-12-09 09:55:44.335060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.139 [2024-12-09 09:55:44.335267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.139 [2024-12-09 09:55:44.335276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.139 [2024-12-09 09:55:44.335282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.139 [2024-12-09 09:55:44.335288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.139 [2024-12-09 09:55:44.348064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.139 [2024-12-09 09:55:44.348560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.139 [2024-12-09 09:55:44.348576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.139 [2024-12-09 09:55:44.348582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.139 [2024-12-09 09:55:44.348795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.139 [2024-12-09 09:55:44.348999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.139 [2024-12-09 09:55:44.349007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.139 [2024-12-09 09:55:44.349013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.139 [2024-12-09 09:55:44.349017] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.139 [2024-12-09 09:55:44.361724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.139 [2024-12-09 09:55:44.362325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.139 [2024-12-09 09:55:44.362356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.139 [2024-12-09 09:55:44.362365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.139 [2024-12-09 09:55:44.362585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.139 [2024-12-09 09:55:44.362807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.139 [2024-12-09 09:55:44.362817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.362823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.362829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.140 [2024-12-09 09:55:44.375324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.375969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.376001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.376010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.376229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.376436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.376444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.376450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.376455] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.140 [2024-12-09 09:55:44.388943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.389553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.389584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.389593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.389822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.390030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.390037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.390043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.390049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.140 [2024-12-09 09:55:44.402487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.403116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.403148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.403156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.403376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.403584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.403591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.403600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.403606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.140 [2024-12-09 09:55:44.416092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.416709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.416741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.416750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.416970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.417177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.417186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.417191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.417197] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.140 [2024-12-09 09:55:44.429686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.430335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.430366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.430376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.430595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.430810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.430820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.430826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.430833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.140 [2024-12-09 09:55:44.443320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.443953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.443985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.443994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.444215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.444422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.444429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.444435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.444441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.140 [2024-12-09 09:55:44.456929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.457584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.457616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.457624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.457852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.458065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.458073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.458079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.458084] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.140 5982.60 IOPS, 23.37 MiB/s [2024-12-09T08:55:44.593Z] [2024-12-09 09:55:44.470533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.471145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.471177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.471186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.471405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.471613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.471621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.471626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.471632] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
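The interleaved throughput sample above (5982.60 IOPS, 23.37 MiB/s, stamped in UTC by the workload's own reporting timer) is consistent with a 4 KiB I/O size: 5982.60 × 4096 B ≈ 24,504,730 B/s, and 24,504,730 / 1,048,576 ≈ 23.37 MiB/s. Which tool emits the sample is not identified in this excerpt, but the numbers imply the I/O generator kept measuring 4 KiB requests while every reconnect attempt on this path failed.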
00:38:09.140 [2024-12-09 09:55:44.484124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.484630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.484651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.484657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.140 [2024-12-09 09:55:44.484861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.140 [2024-12-09 09:55:44.485065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.140 [2024-12-09 09:55:44.485073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.140 [2024-12-09 09:55:44.485078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.140 [2024-12-09 09:55:44.485083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.140 [2024-12-09 09:55:44.497748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.140 [2024-12-09 09:55:44.498255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.140 [2024-12-09 09:55:44.498268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.140 [2024-12-09 09:55:44.498278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.498481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.498689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.498696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.498702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.498707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.141 [2024-12-09 09:55:44.511368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.511973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.512005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.512014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.512234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.512441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.512449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.512455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.512460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.141 [2024-12-09 09:55:44.524946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.525561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.525593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.525602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.525828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.526036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.526044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.526049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.526055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.141 [2024-12-09 09:55:44.538527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.539037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.539052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.539058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.539262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.539469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.539478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.539483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.539488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.141 [2024-12-09 09:55:44.552206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.552914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.552945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.552954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.553173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.553381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.553389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.553395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.553401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.141 [2024-12-09 09:55:44.565900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.566502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.566533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.566543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.566769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.566977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.566985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.566991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.566997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.141 [2024-12-09 09:55:44.579485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.141 [2024-12-09 09:55:44.580117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.141 [2024-12-09 09:55:44.580149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.141 [2024-12-09 09:55:44.580158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.141 [2024-12-09 09:55:44.580378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.141 [2024-12-09 09:55:44.580585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.141 [2024-12-09 09:55:44.580592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.141 [2024-12-09 09:55:44.580602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.141 [2024-12-09 09:55:44.580608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.404 [2024-12-09 09:55:44.593097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.593620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.593636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.593648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.593852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.404 [2024-12-09 09:55:44.594055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.404 [2024-12-09 09:55:44.594064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.404 [2024-12-09 09:55:44.594069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.404 [2024-12-09 09:55:44.594075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.404 [2024-12-09 09:55:44.606740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.607380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.607412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.607420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.607647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.404 [2024-12-09 09:55:44.607855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.404 [2024-12-09 09:55:44.607864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.404 [2024-12-09 09:55:44.607870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.404 [2024-12-09 09:55:44.607876] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.404 [2024-12-09 09:55:44.620351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.620939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.620970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.620979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.621198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.404 [2024-12-09 09:55:44.621405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.404 [2024-12-09 09:55:44.621414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.404 [2024-12-09 09:55:44.621420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.404 [2024-12-09 09:55:44.621426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.404 [2024-12-09 09:55:44.633909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.634565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.634596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.634605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.634831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.404 [2024-12-09 09:55:44.635039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.404 [2024-12-09 09:55:44.635047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.404 [2024-12-09 09:55:44.635053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.404 [2024-12-09 09:55:44.635059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.404 [2024-12-09 09:55:44.647538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.648074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.648106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.648115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.648336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.404 [2024-12-09 09:55:44.648543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.404 [2024-12-09 09:55:44.648551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.404 [2024-12-09 09:55:44.648557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.404 [2024-12-09 09:55:44.648563] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.404 [2024-12-09 09:55:44.661240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.404 [2024-12-09 09:55:44.661761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.404 [2024-12-09 09:55:44.661793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.404 [2024-12-09 09:55:44.661802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.404 [2024-12-09 09:55:44.662023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.662230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.662237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.662243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.662249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.405 [2024-12-09 09:55:44.674944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.675594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.675625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.675644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.675864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.676071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.676079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.676085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.676091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.405 [2024-12-09 09:55:44.688572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.689205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.689237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.689246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.689465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.689680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.689690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.689696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.689704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.405 [2024-12-09 09:55:44.702190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.702764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.702796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.702805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.703027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.703234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.703241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.703247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.703253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.405 [2024-12-09 09:55:44.715740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.716387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.716420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.716428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.716655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.716867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.716874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.716880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.716886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.405 [2024-12-09 09:55:44.729366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.730029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.730061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.730070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.730289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.730496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.730505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.730511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.730517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.405 [2024-12-09 09:55:44.743002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.743507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.743523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.743528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.743737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.743942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.743949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.743954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.743959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.405 [2024-12-09 09:55:44.756648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.757277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.757309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.757318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.757537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.757752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.757761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.757767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.757777] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.405 [2024-12-09 09:55:44.770270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.770940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.770972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.770981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.771201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.771409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.771417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.771423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.771429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.405 [2024-12-09 09:55:44.783921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.784525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.784557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.405 [2024-12-09 09:55:44.784566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.405 [2024-12-09 09:55:44.784793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.405 [2024-12-09 09:55:44.785000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.405 [2024-12-09 09:55:44.785008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.405 [2024-12-09 09:55:44.785014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.405 [2024-12-09 09:55:44.785020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.405 [2024-12-09 09:55:44.797503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.405 [2024-12-09 09:55:44.798152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.405 [2024-12-09 09:55:44.798185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.406 [2024-12-09 09:55:44.798193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.406 [2024-12-09 09:55:44.798412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.406 [2024-12-09 09:55:44.798620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.406 [2024-12-09 09:55:44.798628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.406 [2024-12-09 09:55:44.798634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.406 [2024-12-09 09:55:44.798647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.406 [2024-12-09 09:55:44.811129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.406 [2024-12-09 09:55:44.811742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.406 [2024-12-09 09:55:44.811774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.406 [2024-12-09 09:55:44.811783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.406 [2024-12-09 09:55:44.812005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.406 [2024-12-09 09:55:44.812213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.406 [2024-12-09 09:55:44.812221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.406 [2024-12-09 09:55:44.812227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.406 [2024-12-09 09:55:44.812233] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.406 [2024-12-09 09:55:44.824730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.406 [2024-12-09 09:55:44.825377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.406 [2024-12-09 09:55:44.825408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.406 [2024-12-09 09:55:44.825417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.406 [2024-12-09 09:55:44.825643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.406 [2024-12-09 09:55:44.825851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.406 [2024-12-09 09:55:44.825860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.406 [2024-12-09 09:55:44.825867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.406 [2024-12-09 09:55:44.825874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.406 [2024-12-09 09:55:44.838360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.406 [2024-12-09 09:55:44.839019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.406 [2024-12-09 09:55:44.839051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.406 [2024-12-09 09:55:44.839060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.406 [2024-12-09 09:55:44.839279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.406 [2024-12-09 09:55:44.839486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.406 [2024-12-09 09:55:44.839494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.406 [2024-12-09 09:55:44.839499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.406 [2024-12-09 09:55:44.839505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.406 [2024-12-09 09:55:44.851992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.406 [2024-12-09 09:55:44.852496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.406 [2024-12-09 09:55:44.852512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.406 [2024-12-09 09:55:44.852518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.406 [2024-12-09 09:55:44.852732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.406 [2024-12-09 09:55:44.852936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.406 [2024-12-09 09:55:44.852944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.406 [2024-12-09 09:55:44.852950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.406 [2024-12-09 09:55:44.852955] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.670 [2024-12-09 09:55:44.865682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.670 [2024-12-09 09:55:44.866321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.670 [2024-12-09 09:55:44.866353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.670 [2024-12-09 09:55:44.866362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.670 [2024-12-09 09:55:44.866581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.670 [2024-12-09 09:55:44.866795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.670 [2024-12-09 09:55:44.866804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.670 [2024-12-09 09:55:44.866810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.670 [2024-12-09 09:55:44.866816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.670 [2024-12-09 09:55:44.879312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.670 [2024-12-09 09:55:44.879862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.670 [2024-12-09 09:55:44.879878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.670 [2024-12-09 09:55:44.879885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.670 [2024-12-09 09:55:44.880088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.670 [2024-12-09 09:55:44.880291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.670 [2024-12-09 09:55:44.880299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.670 [2024-12-09 09:55:44.880305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.670 [2024-12-09 09:55:44.880311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.670 [2024-12-09 09:55:44.892980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.670 [2024-12-09 09:55:44.893475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.670 [2024-12-09 09:55:44.893489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.670 [2024-12-09 09:55:44.893495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.670 [2024-12-09 09:55:44.893702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.670 [2024-12-09 09:55:44.893906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.670 [2024-12-09 09:55:44.893917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.670 [2024-12-09 09:55:44.893922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.670 [2024-12-09 09:55:44.893927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.670 [2024-12-09 09:55:44.906586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.670 [2024-12-09 09:55:44.907083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.670 [2024-12-09 09:55:44.907096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.670 [2024-12-09 09:55:44.907102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.670 [2024-12-09 09:55:44.907304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.670 [2024-12-09 09:55:44.907508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.907515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.907520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.907525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.671 [2024-12-09 09:55:44.920196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.920736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.920749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.920755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.920958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.921161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.921169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.921174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.921179] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.671 [2024-12-09 09:55:44.933852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.934458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.934489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.934498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.934726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.934934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.934943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.934949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.934959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.671 [2024-12-09 09:55:44.947444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.948089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.948121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.948130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.948351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.948558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.948567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.948574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.948579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.671 [2024-12-09 09:55:44.961083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.961634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.961655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.961661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.961865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.962069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.962077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.962083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.962088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.671 [2024-12-09 09:55:44.974810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.975452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.975484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.975493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.975721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.975929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.975937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.975943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.975950] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.671 [2024-12-09 09:55:44.988460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:44.989007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:44.989039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:44.989048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:44.989267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:44.989474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:44.989482] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:44.989488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:44.989493] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.671 [2024-12-09 09:55:45.002198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:45.002734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:45.002766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:45.002775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:45.002996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:45.003203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:45.003212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:45.003217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:45.003224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.671 [2024-12-09 09:55:45.015905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:45.016551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:45.016582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:45.016591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:45.016818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:45.017026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:45.017033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:45.017039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:45.017045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.671 [2024-12-09 09:55:45.029531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:45.030180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:45.030212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.671 [2024-12-09 09:55:45.030221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.671 [2024-12-09 09:55:45.030447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.671 [2024-12-09 09:55:45.030663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.671 [2024-12-09 09:55:45.030671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.671 [2024-12-09 09:55:45.030677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.671 [2024-12-09 09:55:45.030684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.671 [2024-12-09 09:55:45.043163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.671 [2024-12-09 09:55:45.043793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.671 [2024-12-09 09:55:45.043825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.043834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.044053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.044261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.044270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.044275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.044281] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.672 [2024-12-09 09:55:45.056772] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.672 [2024-12-09 09:55:45.057391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.672 [2024-12-09 09:55:45.057423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.057432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.057658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.057866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.057875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.057881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.057887] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.672 [2024-12-09 09:55:45.070390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.672 [2024-12-09 09:55:45.070988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.672 [2024-12-09 09:55:45.071020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.071029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.071248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.071455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.071468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.071475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.071481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.672 [2024-12-09 09:55:45.083983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.672 [2024-12-09 09:55:45.084590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.672 [2024-12-09 09:55:45.084622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.084631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.084860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.085068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.085076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.085082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.085088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.672 [2024-12-09 09:55:45.097574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.672 [2024-12-09 09:55:45.098179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.672 [2024-12-09 09:55:45.098211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.098220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.098439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.098657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.098665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.098671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.098677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
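The NOTICE/ERROR pairs repeat because bdev_nvme keeps driving the controller reset state machine: disconnect the controller, start an asynchronous reconnect, poll it to completion, and on failure mark the controller failed and schedule the next attempt. SPDK exposes the same three steps as public API (spdk_nvme_ctrlr_disconnect, spdk_nvme_ctrlr_reconnect_async, spdk_nvme_ctrlr_reconnect_poll_async — the last two are the functions named in the log). The sketch below is a schematic of that loop, not the bdev_nvme implementation: the busy-poll and the assumption that the poll call returns -EAGAIN while the attempt is still in flight are illustrative.

    /* Schematic of the reset/reconnect loop the log shows bdev_nvme repeating.
     * Uses SPDK's public controller-reset API (include/spdk/nvme.h). The
     * busy-poll and the -EAGAIN convention are assumptions; bdev_nvme's real
     * pacing and error policy live in bdev_nvme.c. */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    try_reset(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        spdk_nvme_ctrlr_disconnect(ctrlr);      /* "resetting controller" NOTICE */
        spdk_nvme_ctrlr_reconnect_async(ctrlr); /* kicks off the TCP reconnect   */

        /* Poll until the reconnect resolves. While the target is down, the
         * underlying connect() keeps failing with ECONNREFUSED and the poll
         * ends in "controller reinitialization failed", as in the log. */
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);

        return rc;  /* 0 = controller back online, negative = attempt failed */
    }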
00:38:09.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3057780 Killed "${NVMF_APP[@]}" "$@" 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.672 [2024-12-09 09:55:45.111162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.672 [2024-12-09 09:55:45.111745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.672 [2024-12-09 09:55:45.111777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.672 [2024-12-09 09:55:45.111786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.672 [2024-12-09 09:55:45.112012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.672 [2024-12-09 09:55:45.112220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.672 [2024-12-09 09:55:45.112227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.672 [2024-12-09 09:55:45.112233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.672 [2024-12-09 09:55:45.112239] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3059431 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3059431 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3059431 ']' 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:09.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:09.672 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:09.934 [2024-12-09 09:55:45.124726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.934 [2024-12-09 09:55:45.125155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.934 [2024-12-09 09:55:45.125170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.934 [2024-12-09 09:55:45.125177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.934 [2024-12-09 09:55:45.125380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.934 [2024-12-09 09:55:45.125584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.934 [2024-12-09 09:55:45.125592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.934 [2024-12-09 09:55:45.125597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.934 [2024-12-09 09:55:45.125602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.934 [2024-12-09 09:55:45.138275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.934 [2024-12-09 09:55:45.138981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.934 [2024-12-09 09:55:45.139013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.934 [2024-12-09 09:55:45.139022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.934 [2024-12-09 09:55:45.139241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.934 [2024-12-09 09:55:45.139448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.934 [2024-12-09 09:55:45.139456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.139466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.139472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
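Interleaved with the retries, the harness has restarted the target: tgt_init → nvmfappstart -m 0xE relaunches build/bin/nvmf_tgt in the cvl_0_0_ns_spdk namespace, records nvmfpid=3059431, and waitforlisten polls (max_retries=100) until the new process accepts connections on the RPC socket /var/tmp/spdk.sock. That readiness check boils down to retrying an AF_UNIX connect; a minimal sketch, with the socket path and retry budget taken from the trace above and the 1 s pacing and function names assumed:

    /* "waitforlisten"-style readiness probe: retry an AF_UNIX connect() to the
     * SPDK RPC socket until the freshly started target accepts it. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static int
    wait_for_rpc(const char *path, int max_retries)
    {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                return -1;
            }

            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;   /* target is up and listening */
            }
            close(fd);
            sleep(1);       /* assumed pacing between probes */
        }
        return -1;          /* gave up, like waitforlisten timing out */
    }

    int main(void)
    {
        return wait_for_rpc("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
    }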
00:38:09.935 [2024-12-09 09:55:45.151960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.152621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.152658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.152667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.152886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.153093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.153101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.153106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.153112] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.935 [2024-12-09 09:55:45.164560] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:09.935 [2024-12-09 09:55:45.164608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:09.935 [2024-12-09 09:55:45.165600] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.166121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.166137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.166143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.166346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.166550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.166557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.166563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.166568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
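The "Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization" entry above shows the -m 0xE mask handed to nvmf_tgt being forwarded to DPDK's EAL as -c 0xE. In application code the same wiring goes through struct spdk_app_opts; a minimal sketch, where the app name and start callback are illustrative and a real target would keep running and configure its subsystems over RPC rather than stop immediately:

    /* Minimal SPDK app bootstrap showing how a core mask like 0xE reaches
     * DPDK's EAL (-c 0xE in the log entry above). */
    #include "spdk/event.h"

    static void
    app_started(void *arg)
    {
        (void)arg;
        spdk_app_stop(0);   /* real targets keep running; the sketch just exits */
    }

    int
    main(void)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "nvmf_sketch";
        opts.reactor_mask = "0xE";   /* same mask nvmf_tgt received via -m 0xE */

        rc = spdk_app_start(&opts, app_started, NULL);  /* blocks until stop */
        spdk_app_fini();
        return rc;
    }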
00:38:09.935 [2024-12-09 09:55:45.179285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.179965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.179997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.180006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.180225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.180433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.180441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.180451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.180457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.935 [2024-12-09 09:55:45.192946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.193603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.193636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.193650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.193871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.194077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.194086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.194092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.194098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.935 [2024-12-09 09:55:45.206484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.207104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.207135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.207144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.207364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.207571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.207579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.207586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.207592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.935 [2024-12-09 09:55:45.220082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.220722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.220754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.220763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.220984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.221190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.221198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.221204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.221211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.935 [2024-12-09 09:55:45.233706] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.234342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.234374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.234382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.234602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.234815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.234824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.234830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.234836] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.935 [2024-12-09 09:55:45.247321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.247968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.248000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.248009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.248228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.248436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.248444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.248450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.248456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.935 [2024-12-09 09:55:45.253052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:09.935 [2024-12-09 09:55:45.260947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.935 [2024-12-09 09:55:45.261612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.935 [2024-12-09 09:55:45.261649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.935 [2024-12-09 09:55:45.261659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.935 [2024-12-09 09:55:45.261880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.935 [2024-12-09 09:55:45.262088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.935 [2024-12-09 09:55:45.262096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.935 [2024-12-09 09:55:45.262102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.935 [2024-12-09 09:55:45.262108] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.935 [2024-12-09 09:55:45.268691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:09.935 [2024-12-09 09:55:45.268713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:09.936 [2024-12-09 09:55:45.268723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:09.936 [2024-12-09 09:55:45.268729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:09.936 [2024-12-09 09:55:45.268734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:09.936 [2024-12-09 09:55:45.269719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:09.936 [2024-12-09 09:55:45.269883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:09.936 [2024-12-09 09:55:45.269884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:09.936 [2024-12-09 09:55:45.274618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.275233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.275251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.275257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.275464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.275673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.275681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.275687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.275693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.936 [2024-12-09 09:55:45.288184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.288858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.288893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.288902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.289126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.289333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.289341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.289347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.289354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
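"Total cores available: 3" and the three reactor threads above follow directly from the mask: 0xE is binary 1110, so bits 1, 2 and 3 are set, and reactors run on cores 1-3 while core 0 stays free. The decoding, spelled out:

    /* Decode the core mask: 0xE = 0b1110 -> bits 1,2,3 set -> cores 1-3. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;   /* from nvmf_tgt -m 0xE */
        int total = 0;

        printf("cores:");
        for (int core = 0; core < 64; core++) {
            if (mask & (1UL << core)) {
                printf(" %d", core);
                total++;
            }
        }
        printf("\nTotal cores available: %d\n", total);   /* prints 3, as in the log */
        return 0;
    }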
00:38:09.936 [2024-12-09 09:55:45.301847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.302559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.302592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.302601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.302830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.303037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.303046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.303057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.303064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.936 [2024-12-09 09:55:45.315553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.316227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.316260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.316269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.316489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.316702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.316711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.316717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.316723] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.936 [2024-12-09 09:55:45.329203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.329932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.329964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.329973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.330192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.330399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.330408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.330413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.330420] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.936 [2024-12-09 09:55:45.342905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.343469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.343485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.343491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.343699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.343904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.343920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.343925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.343932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.936 [2024-12-09 09:55:45.356606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.357264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.357296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.357305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.357525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.357737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.357746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.357752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.357759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:09.936 [2024-12-09 09:55:45.370264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.370734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.370766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.370775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.370996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:09.936 [2024-12-09 09:55:45.371203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:09.936 [2024-12-09 09:55:45.371211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:09.936 [2024-12-09 09:55:45.371217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:09.936 [2024-12-09 09:55:45.371224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:09.936 [2024-12-09 09:55:45.383975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:09.936 [2024-12-09 09:55:45.384645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:09.936 [2024-12-09 09:55:45.384677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:09.936 [2024-12-09 09:55:45.384687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:09.936 [2024-12-09 09:55:45.384909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.198 [2024-12-09 09:55:45.385116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.198 [2024-12-09 09:55:45.385125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.198 [2024-12-09 09:55:45.385131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.198 [2024-12-09 09:55:45.385137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.198 [2024-12-09 09:55:45.397803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.198 [2024-12-09 09:55:45.398429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.198 [2024-12-09 09:55:45.398460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.198 [2024-12-09 09:55:45.398473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.198 [2024-12-09 09:55:45.398699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.198 [2024-12-09 09:55:45.398906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.198 [2024-12-09 09:55:45.398915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.198 [2024-12-09 09:55:45.398921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.398927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.199 [2024-12-09 09:55:45.411415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.411876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.411892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.411898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.412102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.412306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.412314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.412320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.412325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.199 [2024-12-09 09:55:45.425005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.425521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.425534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.425541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.425747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.425952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.425960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.425965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.425970] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.199 [2024-12-09 09:55:45.438647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.439262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.439294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.439304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.439523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.439741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.439751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.439757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.439763] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.199 [2024-12-09 09:55:45.452252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.452940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.452972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.452981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.453201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.453408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.453417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.453423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.453429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.199 4985.50 IOPS, 19.47 MiB/s [2024-12-09T08:55:45.652Z] [2024-12-09 09:55:45.467067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.467537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.467568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.467578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.467806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.468014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.468023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.468029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.468035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.199 [2024-12-09 09:55:45.480741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.481357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.481389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.481398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.481618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.481832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.481842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.481852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.481858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
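The "4985.50 IOPS, 19.47 MiB/s" marker interleaved above is the benchmark's periodic throughput line, printed on its own cadence (with a UTC timestamp) and merged by the console with the error stream. A hedged example of the kind of bdevperf invocation these host tests use; the queue depth, I/O size, workload, duration, and config path below are illustrative values, not read from this run:

    # Illustrative only: flags are typical for SPDK's bdevperf example app,
    # not the exact command line of this job.
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 25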
00:38:10.199 [2024-12-09 09:55:45.494351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.495063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.495095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.495105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.495324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.495532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.495540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.495546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.495552] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.199 [2024-12-09 09:55:45.508049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.508568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.508584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.508589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.508798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.509002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.509010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.509015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.509020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.199 [2024-12-09 09:55:45.521704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.522356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.522389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.522398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.199 [2024-12-09 09:55:45.522617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.199 [2024-12-09 09:55:45.522831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.199 [2024-12-09 09:55:45.522841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.199 [2024-12-09 09:55:45.522847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.199 [2024-12-09 09:55:45.522852] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.199 [2024-12-09 09:55:45.535348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.199 [2024-12-09 09:55:45.535920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.199 [2024-12-09 09:55:45.535950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.199 [2024-12-09 09:55:45.535959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.536179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.536386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.536394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.536401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.536406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.200 [2024-12-09 09:55:45.548910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.549580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.549612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.549621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.549847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.550055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.550064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.550070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.550076] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.200 [2024-12-09 09:55:45.562569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.563090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.563106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.563112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.563316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.563520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.563527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.563533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.563538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.200 [2024-12-09 09:55:45.576124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.576779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.576812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.576824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.577043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.577250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.577258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.577264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.577270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.200 [2024-12-09 09:55:45.589798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.590312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.590343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.590352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.590572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.590785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.590795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.590800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.590807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.200 [2024-12-09 09:55:45.603495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.604028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.604044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.604050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.604253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.604457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.604466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.604471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.604476] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.200 [2024-12-09 09:55:45.617157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.617741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.617773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.617782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.618004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.618215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.618224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.618230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.618236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.200 [2024-12-09 09:55:45.630737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.631254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.631270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.631276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.631479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.631687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.631695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.631700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.631706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.200 [2024-12-09 09:55:45.644384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.200 [2024-12-09 09:55:45.644970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.200 [2024-12-09 09:55:45.645002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.200 [2024-12-09 09:55:45.645011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.200 [2024-12-09 09:55:45.645230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.200 [2024-12-09 09:55:45.645438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.200 [2024-12-09 09:55:45.645445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.200 [2024-12-09 09:55:45.645452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.200 [2024-12-09 09:55:45.645458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.463 [2024-12-09 09:55:45.657959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.658467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.658499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.658508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.658734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.658941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.658950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.658960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.658966] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.463 [2024-12-09 09:55:45.671668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.672187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.672203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.672209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.672412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.672616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.672623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.672629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.672634] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.463 [2024-12-09 09:55:45.685326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.685939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.685971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.685980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.686199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.686407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.686414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.686420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.686426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.463 [2024-12-09 09:55:45.698925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.699590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.699622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.699630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.699856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.700065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.700073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.700079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.700085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.463 [2024-12-09 09:55:45.712578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.713233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.713265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.713275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.713494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.713708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.713717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.713722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.713728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.463 [2024-12-09 09:55:45.726215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.726952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.726984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.726993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.727213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.727420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.727429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.727435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.727441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.463 [2024-12-09 09:55:45.739945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.740571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.740604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.740613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.740841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.741051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.741059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.741066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.741072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.463 [2024-12-09 09:55:45.753567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.754232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.754265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.754277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.754496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.463 [2024-12-09 09:55:45.754710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.463 [2024-12-09 09:55:45.754718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.463 [2024-12-09 09:55:45.754724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.463 [2024-12-09 09:55:45.754729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.463 [2024-12-09 09:55:45.767229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.463 [2024-12-09 09:55:45.767792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.463 [2024-12-09 09:55:45.767810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.463 [2024-12-09 09:55:45.767816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.463 [2024-12-09 09:55:45.768019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.768223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.768230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.768236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.768241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.464 [2024-12-09 09:55:45.780931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.781342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.781357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.781363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.781566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.781774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.781785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.781792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.781799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.464 [2024-12-09 09:55:45.794481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.795005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.795019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.795025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.795228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.795437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.795445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.795451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.795456] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.464 [2024-12-09 09:55:45.808158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.808753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.808785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.808794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.809017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.809224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.809232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.809237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.809243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.464 [2024-12-09 09:55:45.821739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.822407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.822439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.822448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.822674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.822882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.822890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.822896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.822902] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.464 [2024-12-09 09:55:45.835398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.835705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.835721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.835728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.835931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.836135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.836143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.836149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.836158] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.464 [2024-12-09 09:55:45.849038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.849394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.849407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.849413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.849616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.849825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.849833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.849838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.849843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.464 [2024-12-09 09:55:45.862719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.863222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.863235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.863241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.863443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.863659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.863668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.863673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.863678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.464 [2024-12-09 09:55:45.876359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.876981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.877013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.877022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.877250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.877459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.877467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.877473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.877479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.464 [2024-12-09 09:55:45.890020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.890649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.890682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.890691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.464 [2024-12-09 09:55:45.890911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.464 [2024-12-09 09:55:45.891119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.464 [2024-12-09 09:55:45.891126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.464 [2024-12-09 09:55:45.891132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.464 [2024-12-09 09:55:45.891138] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.464 [2024-12-09 09:55:45.903634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.464 [2024-12-09 09:55:45.904153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.464 [2024-12-09 09:55:45.904169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.464 [2024-12-09 09:55:45.904175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.465 [2024-12-09 09:55:45.904379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.465 [2024-12-09 09:55:45.904582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.465 [2024-12-09 09:55:45.904591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.465 [2024-12-09 09:55:45.904597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.465 [2024-12-09 09:55:45.904602] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.726 [2024-12-09 09:55:45.917290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.726 [2024-12-09 09:55:45.917795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.726 [2024-12-09 09:55:45.917810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.726 [2024-12-09 09:55:45.917816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.726 [2024-12-09 09:55:45.918019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.726 [2024-12-09 09:55:45.918224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.726 [2024-12-09 09:55:45.918231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.726 [2024-12-09 09:55:45.918238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.726 [2024-12-09 09:55:45.918245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.726 [2024-12-09 09:55:45.930931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.726 [2024-12-09 09:55:45.931537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.726 [2024-12-09 09:55:45.931569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.726 [2024-12-09 09:55:45.931582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.726 [2024-12-09 09:55:45.931810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.726 [2024-12-09 09:55:45.932018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.726 [2024-12-09 09:55:45.932026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.726 [2024-12-09 09:55:45.932032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.726 [2024-12-09 09:55:45.932038] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.726 [2024-12-09 09:55:45.944529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.726 [2024-12-09 09:55:45.945184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.726 [2024-12-09 09:55:45.945216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.726 [2024-12-09 09:55:45.945225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.726 [2024-12-09 09:55:45.945444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.726 [2024-12-09 09:55:45.945658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.726 [2024-12-09 09:55:45.945667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.726 [2024-12-09 09:55:45.945673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.726 [2024-12-09 09:55:45.945679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.726 [2024-12-09 09:55:45.958169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.726 [2024-12-09 09:55:45.958746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.727 [2024-12-09 09:55:45.958779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.727 [2024-12-09 09:55:45.958788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.727 [2024-12-09 09:55:45.959009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.727 [2024-12-09 09:55:45.959216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.727 [2024-12-09 09:55:45.959224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.727 [2024-12-09 09:55:45.959230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.727 [2024-12-09 09:55:45.959236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.727 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:10.727 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:10.727 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:10.727 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:10.727 09:55:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:10.727 [2024-12-09 09:55:45.971750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.727 [2024-12-09 09:55:45.972305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.727 [2024-12-09 09:55:45.972321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.727 [2024-12-09 09:55:45.972331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.727 [2024-12-09 09:55:45.972537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.727 [2024-12-09 09:55:45.972746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.727 [2024-12-09 09:55:45.972755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.727 [2024-12-09 09:55:45.972763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.727 [2024-12-09 09:55:45.972770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
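Interleaved with the error stream, the xtrace lines above show the test script leaving its target-startup wait: the exhaustion check "(( i == 0 ))" is false (retries remained, so the wait succeeded), the helper returns 0, and timing_exit closes the start_nvmf_tgt timing region. A hedged sketch of that style of wait helper follows; the function name, retry count, sleep interval, and probe RPC are illustrative, not read from autotest_common.sh:

    wait_for_tgt() {   # hypothetical name; mirrors the trace, not the real helper
        local i=30
        while (( i > 0 )); do
            # probe the target over its RPC socket; rpc_get_methods is a
            # side-effect-free SPDK RPC suitable as a liveness check
            scripts/rpc.py rpc_get_methods &>/dev/null && break
            sleep 0.5
            (( i-- ))
        done
        (( i == 0 )) && return 1   # the same exhaustion check seen in the trace
        return 0
    }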
00:38:10.727 [2024-12-09 09:55:45.985461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.727 [2024-12-09 09:55:45.985848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.727 [2024-12-09 09:55:45.985864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.727 [2024-12-09 09:55:45.985870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.727 [2024-12-09 09:55:45.986074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.727 [2024-12-09 09:55:45.986278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.727 [2024-12-09 09:55:45.986286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.727 [2024-12-09 09:55:45.986291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.727 [2024-12-09 09:55:45.986297] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:38:10.727 [2024-12-09 09:55:45.999174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:38:10.727 [2024-12-09 09:55:45.999675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:10.727 [2024-12-09 09:55:45.999689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420 00:38:10.727 [2024-12-09 09:55:45.999694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set 00:38:10.727 [2024-12-09 09:55:45.999897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor 00:38:10.727 [2024-12-09 09:55:46.000099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:38:10.727 [2024-12-09 09:55:46.000106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:38:10.727 [2024-12-09 09:55:46.000112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:38:10.727 [2024-12-09 09:55:46.000117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:10.727 [2024-12-09 09:55:46.008901] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:10.727 [2024-12-09 09:55:46.012826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.727 [2024-12-09 09:55:46.013442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.727 [2024-12-09 09:55:46.013474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:10.727 [2024-12-09 09:55:46.013484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:10.727 [2024-12-09 09:55:46.013709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:10.727 [2024-12-09 09:55:46.013916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:10.727 [2024-12-09 09:55:46.013925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:10.727 [2024-12-09 09:55:46.013931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:10.727 [2024-12-09 09:55:46.013937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:10.727 [2024-12-09 09:55:46.026432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.727 [2024-12-09 09:55:46.027099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.727 [2024-12-09 09:55:46.027131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:10.727 [2024-12-09 09:55:46.027140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:10.727 [2024-12-09 09:55:46.027360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:10.727 [2024-12-09 09:55:46.027567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:10.727 [2024-12-09 09:55:46.027575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:10.727 [2024-12-09 09:55:46.027581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:10.727 [2024-12-09 09:55:46.027587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:10.727 [2024-12-09 09:55:46.040082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.727 [2024-12-09 09:55:46.040607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.727 [2024-12-09 09:55:46.040623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:10.727 [2024-12-09 09:55:46.040629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:10.727 [2024-12-09 09:55:46.040837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:10.727 [2024-12-09 09:55:46.041041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:10.727 [2024-12-09 09:55:46.041050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:10.727 [2024-12-09 09:55:46.041055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:10.727 [2024-12-09 09:55:46.041060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:10.727 Malloc0
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:10.727 [2024-12-09 09:55:46.053746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.727 [2024-12-09 09:55:46.054280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.727 [2024-12-09 09:55:46.054309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:10.727 [2024-12-09 09:55:46.054318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:10.727 [2024-12-09 09:55:46.054539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:10.727 [2024-12-09 09:55:46.054751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:10.727 [2024-12-09 09:55:46.054760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:10.727 [2024-12-09 09:55:46.054766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:10.727 [2024-12-09 09:55:46.054773] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.727 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:10.727 [2024-12-09 09:55:46.067468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.728 [2024-12-09 09:55:46.067962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:10.728 [2024-12-09 09:55:46.067978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa6f790 with addr=10.0.0.2, port=4420
00:38:10.728 [2024-12-09 09:55:46.067984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa6f790 is same with the state(6) to be set
00:38:10.728 [2024-12-09 09:55:46.068188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6f790 (9): Bad file descriptor
00:38:10.728 [2024-12-09 09:55:46.068392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:38:10.728 [2024-12-09 09:55:46.068401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:38:10.728 [2024-12-09 09:55:46.068406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:38:10.728 [2024-12-09 09:55:46.068411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:10.728 [2024-12-09 09:55:46.076435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:10.728 [2024-12-09 09:55:46.081112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:10.728 09:55:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3058412
00:38:10.728 [2024-12-09 09:55:46.105569] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:38:12.239 4927.71 IOPS, 19.25 MiB/s
[2024-12-09T08:55:48.630Z] 5924.12 IOPS, 23.14 MiB/s
[2024-12-09T08:55:49.571Z] 6717.22 IOPS, 26.24 MiB/s
[2024-12-09T08:55:50.511Z] 7343.60 IOPS, 28.69 MiB/s
[2024-12-09T08:55:51.894Z] 7829.73 IOPS, 30.58 MiB/s
[2024-12-09T08:55:52.850Z] 8262.83 IOPS, 32.28 MiB/s
[2024-12-09T08:55:53.792Z] 8629.15 IOPS, 33.71 MiB/s
[2024-12-09T08:55:54.738Z] 8942.29 IOPS, 34.93 MiB/s
[2024-12-09T08:55:54.738Z] 9202.53 IOPS, 35.95 MiB/s
00:38:19.285 Latency(us)
00:38:19.285 [2024-12-09T08:55:54.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:19.285 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:38:19.285 Verification LBA range: start 0x0 length 0x4000
00:38:19.285 Nvme1n1 : 15.01 9207.06 35.97 9579.90 0.00 6790.22 689.49 15073.28
00:38:19.285 [2024-12-09T08:55:54.738Z] ===================================================================================================================
00:38:19.285 [2024-12-09T08:55:54.738Z] Total : 9207.06 35.97 9579.90 0.00 6790.22 689.49 15073.28
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:19.285 rmmod nvme_tcp
00:38:19.285 rmmod nvme_fabrics
00:38:19.285 rmmod nvme_keyring
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3059431 ']'
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3059431
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3059431 ']'
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3059431
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:19.285 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3059431
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3059431'
00:38:19.547 killing process with pid 3059431
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3059431
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3059431
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:19.547 09:55:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:22.091
00:38:22.091 real 0m27.891s
00:38:22.091 user 1m2.675s
00:38:22.091 sys 0m7.447s
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:22.091 ************************************
00:38:22.091 END TEST nvmf_bdevperf
00:38:22.091 ************************************
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:22.091 09:55:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:22.091 ************************************
00:38:22.091 START TEST nvmf_target_disconnect
00:38:22.091 ************************************
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:38:22.091 * Looking for test storage...
00:38:22.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:38:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:22.091 --rc genhtml_branch_coverage=1
00:38:22.091 --rc genhtml_function_coverage=1
00:38:22.091 --rc genhtml_legend=1
00:38:22.091 --rc geninfo_all_blocks=1
00:38:22.091 --rc geninfo_unexecuted_blocks=1
00:38:22.091
00:38:22.091 '
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:38:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:22.091 --rc genhtml_branch_coverage=1
00:38:22.091 --rc genhtml_function_coverage=1
00:38:22.091 --rc genhtml_legend=1
00:38:22.091 --rc geninfo_all_blocks=1
00:38:22.091 --rc geninfo_unexecuted_blocks=1
00:38:22.091
00:38:22.091 '
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:38:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:22.091 --rc genhtml_branch_coverage=1
00:38:22.091 --rc genhtml_function_coverage=1
00:38:22.091 --rc genhtml_legend=1
00:38:22.091 --rc geninfo_all_blocks=1
00:38:22.091 --rc geninfo_unexecuted_blocks=1
00:38:22.091
00:38:22.091 '
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:38:22.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:22.091 --rc genhtml_branch_coverage=1
00:38:22.091 --rc genhtml_function_coverage=1
00:38:22.091 --rc genhtml_legend=1
00:38:22.091 --rc geninfo_all_blocks=1
00:38:22.091 --rc geninfo_unexecuted_blocks=1
00:38:22.091
00:38:22.091 '
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:22.091 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:38:22.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:38:22.092 09:55:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:38:30.234 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:38:30.234 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:38:30.234 Found net devices under 0000:4b:00.0: cvl_0_0
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:38:30.234 Found net devices under 0000:4b:00.1: cvl_0_1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:30.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:30.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms
00:38:30.234
00:38:30.234 --- 10.0.0.2 ping statistics ---
00:38:30.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:30.234 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms
00:38:30.234 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:30.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:30.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms
00:38:30.234
00:38:30.234 --- 10.0.0.1 ping statistics ---
00:38:30.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:30.234 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:30.235 ************************************
00:38:30.235 START TEST nvmf_target_disconnect_tc1
00:38:30.235 ************************************
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:30.235 [2024-12-09 09:56:04.735232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:30.235 [2024-12-09 09:56:04.735324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x723020 with addr=10.0.0.2, port=4420
00:38:30.235 [2024-12-09 09:56:04.735369] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:38:30.235 [2024-12-09 09:56:04.735386] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:38:30.235 [2024-12-09 09:56:04.735395] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:38:30.235 spdk_nvme_probe() failed for transport address '10.0.0.2'
00:38:30.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:38:30.235 Initializing NVMe Controllers
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:38:30.235
00:38:30.235 real 0m0.141s
00:38:30.235 user 0m0.064s
00:38:30.235 sys 0m0.076s
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:38:30.235 ************************************
00:38:30.235 END TEST nvmf_target_disconnect_tc1
00:38:30.235 ************************************
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:30.235 ************************************
00:38:30.235 START TEST nvmf_target_disconnect_tc2
00:38:30.235 ************************************
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3065474
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3065474
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3065474 ']'
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:30.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:30.235 09:56:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.235 [2024-12-09 09:56:04.896151] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:38:30.235 [2024-12-09 09:56:04.896214] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:30.235 [2024-12-09 09:56:04.994175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:30.235 [2024-12-09 09:56:05.013312] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:30.235 [2024-12-09 09:56:05.013350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:30.235 [2024-12-09 09:56:05.013359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:30.235 [2024-12-09 09:56:05.013365] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:30.235 [2024-12-09 09:56:05.013371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:30.235 [2024-12-09 09:56:05.014916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:38:30.235 [2024-12-09 09:56:05.015067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:38:30.235 [2024-12-09 09:56:05.015218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:38:30.235 [2024-12-09 09:56:05.015219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 Malloc0
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 [2024-12-09 09:56:05.785845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 [2024-12-09 09:56:05.826125] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3065679
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:38:30.496 09:56:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:38:32.529 09:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3065474
00:38:32.529 09:56:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error (sct=0, sc=8)
00:38:32.529 starting I/O failed
00:38:32.529 Read completed with error
(sct=0, sc=8) 00:38:32.529 starting I/O failed 00:38:32.529 [the remaining Read/Write completions in this batch all failed the same way with (sct=0, sc=8); duplicate records condensed] 00:38:32.529 [2024-12-09 09:56:07.859281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:38:32.529 [a second batch of Read/Write completions then failed with (sct=0, sc=8), starting I/O failed; duplicate records condensed] 00:38:32.529 [2024-12-09 09:56:07.859614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:38:32.529 [2024-12-09 09:56:07.859933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.529 [2024-12-09 09:56:07.859976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.529 qpair failed and we were unable to recover it. 00:38:32.529 [2024-12-09 09:56:07.860278 through 09:56:07.861495] four more identical connect() failed (errno = 111) / sock connection error (tqpair=0xc74130, addr=10.0.0.2, port=4420) / qpair failed records; duplicates condensed
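[annotation] errno = 111 in these records is ECONNREFUSED: the target process (pid 3065474) was killed with kill -9 above, so every TCP connect() from the host-side reconnect example is refused while it keeps retrying. The (sct=0, sc=8) completions are NVMe generic status 0x08 (Command Aborted due to SQ Deletion), which is how the in-flight I/Os drain once the qpairs hit CQ transport error -6. A quick way to confirm the errno name on a typical Linux install (header path assumed, not part of the harness):

    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
    # expected output: #define ECONNREFUSED 111 /* Connection refused */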
00:38:32.529 [2024-12-09 09:56:07.861872 through 09:56:07.911296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. [this record repeats for every reconnect attempt in the window, differing only in timestamps; duplicates condensed]
00:38:32.534 [2024-12-09 09:56:07.911606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.911621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.911936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.911952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.912281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.912297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.912659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.912674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.912974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.912989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.913159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.913174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.913494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.913509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.913807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.913822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.914118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.914133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.914476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.914491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 
00:38:32.534 [2024-12-09 09:56:07.914797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.914814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.915153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.915168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.915376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.915390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.915704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.915720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.916031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.916046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.916364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.916380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.916568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.916582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.916899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.916914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.917300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.917316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.917511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.917525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 
00:38:32.534 [2024-12-09 09:56:07.917847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.917863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.918189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.918204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.918534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.918551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.918862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.918877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.919183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.919198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.919515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.919530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.534 qpair failed and we were unable to recover it. 00:38:32.534 [2024-12-09 09:56:07.919718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.534 [2024-12-09 09:56:07.919733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.920118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.920132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.920429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.920445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.920735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.920751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 
00:38:32.535 [2024-12-09 09:56:07.920975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.920989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.921286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.921301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.921682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.921698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.922043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.922058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.922364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.922379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.922680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.922696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.923047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.923062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.923368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.923383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.923692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.923708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.924036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.924051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 
00:38:32.535 [2024-12-09 09:56:07.924267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.924281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.924658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.924674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.924990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.925005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.925310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.925325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.925646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.925662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.925955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.925970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.926276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.926291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.926626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.926649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.926993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.927007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.927317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.927332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 
00:38:32.535 [2024-12-09 09:56:07.927658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.927674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.928000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.928014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.928323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.928339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.928647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.928662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.928983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.928998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.929366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.929380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.929718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.929733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.930035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.930051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.930377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.930393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.930726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.930741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 
00:38:32.535 [2024-12-09 09:56:07.931036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.931050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.931366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.931381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.931663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.931679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.535 [2024-12-09 09:56:07.931978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.535 [2024-12-09 09:56:07.931995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.535 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.932286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.932302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.932648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.932663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.932950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.932966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.933298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.933313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.933609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.933625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.933963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.933978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 
00:38:32.536 [2024-12-09 09:56:07.934281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.934296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.934643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.934658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.934980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.934995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.935292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.935307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.935607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.935623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.935940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.935955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.936269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.936284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.936612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.936628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.936939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.936954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.937171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.937186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 
00:38:32.536 [2024-12-09 09:56:07.937510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.937525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.937850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.937866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.938181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.938196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.938584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.938600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.938945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.938960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.939276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.939291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.939587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.939603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.939816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.939832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.940030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.940046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.940361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.940377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 
00:38:32.536 [2024-12-09 09:56:07.940672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.940690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.941016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.941030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.941303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.941317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.941650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.941665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.942019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.942035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.942345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.942359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.942703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.942719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.943042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.943057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.943393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.943407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.943716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.943731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 
00:38:32.536 [2024-12-09 09:56:07.943956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.943970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.536 [2024-12-09 09:56:07.944253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.536 [2024-12-09 09:56:07.944268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.536 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.944472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.944487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.944808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.944823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.945135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.945150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.945501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.945515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.945797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.945812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.946119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.946134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.946439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.946453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.946768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.946784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 
00:38:32.537 [2024-12-09 09:56:07.947089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.947404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.947418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.947736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.947752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.948092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.948107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.948435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.948451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.948755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.948771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.948967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.948981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.949286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.949301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.949630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.949652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.949948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.949962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 
00:38:32.537 [2024-12-09 09:56:07.950245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.950265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.950582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.950596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.950904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.950920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.951263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.951278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.951595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.951611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.951951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.951966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.952192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.952206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.952530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.952544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.952853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.952868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 00:38:32.537 [2024-12-09 09:56:07.953073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.537 [2024-12-09 09:56:07.953088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.537 qpair failed and we were unable to recover it. 
00:38:32.537 [2024-12-09 09:56:07.953359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.537 [2024-12-09 09:56:07.953374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:32.537 qpair failed and we were unable to recover it.
00:38:32.815 [the three-line error sequence above repeated 210 times in total between 2024-12-09 09:56:07.953359 and 09:56:08.031571, identical apart from timestamps; every attempt targeted tqpair=0xc74130 at 10.0.0.2 port 4420, connect() failed with errno = 111 each time, and the qpair could not be recovered]
00:38:32.815 [2024-12-09 09:56:08.031899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.031929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.032292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.032322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.032665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.032695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.033046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.033075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.033415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.033445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.033796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.033825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.034193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.034223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.034451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.034480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.815 [2024-12-09 09:56:08.034816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.815 [2024-12-09 09:56:08.034846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.815 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.035210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.035239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 
00:38:32.816 [2024-12-09 09:56:08.035580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.035608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.035974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.036005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.036218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.036246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.036483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.036512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.036868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.036898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.037240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.037269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.037623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.037665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.038031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.038061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.038413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.038442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.038798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.038828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 
00:38:32.816 [2024-12-09 09:56:08.039190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.039219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.039572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.039601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.040018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.040049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.040388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.040416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.040782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.040812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.041157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.041186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.041599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.041628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.041989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.042018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.042363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.042392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.042739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.042770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 
00:38:32.816 [2024-12-09 09:56:08.043012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.043041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.043393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.043422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.043777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.043807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.044175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.044204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.044437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.044466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.044846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.044876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.045146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.045176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.045545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.045574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.045898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.045928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 00:38:32.816 [2024-12-09 09:56:08.046272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.046302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.816 qpair failed and we were unable to recover it. 
00:38:32.816 [2024-12-09 09:56:08.046657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.816 [2024-12-09 09:56:08.046688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.047050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.047079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.047420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.047449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.047794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.047825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.048170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.048199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.048541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.048570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.048914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.048944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.049279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.049307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.049659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.049689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.050051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.050079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 
00:38:32.817 [2024-12-09 09:56:08.050408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.050437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.050763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.050801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.051150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.051179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.051536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.051565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.051912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.051944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.052188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.052217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.052531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.052559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.052930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.052960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.053322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.053350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.053580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.053609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 
00:38:32.817 [2024-12-09 09:56:08.053952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.053983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.054333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.054362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.054710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.054740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.055096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.055132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.055473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.055502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.055829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.055860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.056199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.056227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.056583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.056612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.056955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.056985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.057341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.057369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 
00:38:32.817 [2024-12-09 09:56:08.057727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.057756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.058117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.058147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.058483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.058511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.058861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.058890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.059215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.059245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.059573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.059602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.059907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.059938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.060303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.817 [2024-12-09 09:56:08.060332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.817 qpair failed and we were unable to recover it. 00:38:32.817 [2024-12-09 09:56:08.060671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.060702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.060938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.060966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 
00:38:32.818 [2024-12-09 09:56:08.061323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.061352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.061694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.061724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.062066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.062095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.062436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.062465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.062807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.062837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.063197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.063226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.063461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.063488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.063855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.063885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.064232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.064259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.064609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.064650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 
00:38:32.818 [2024-12-09 09:56:08.064897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.064925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.065291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.065320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.065666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.065697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.066046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.066074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.066396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.066425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.066763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.066793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.067157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.067186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.067527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.067556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.067909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.067939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.068312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.068341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 
00:38:32.818 [2024-12-09 09:56:08.068688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.068719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.069067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.069096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.069440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.069468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.069714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.069744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.070117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.070152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.070487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.070516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.070856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.070887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.071231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.071261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.071605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.071634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.071998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.072027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 
00:38:32.818 [2024-12-09 09:56:08.072368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.072396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.072755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.072785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.073150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.073179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.073540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.073568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.073916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.073947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.074289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.818 [2024-12-09 09:56:08.074318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.818 qpair failed and we were unable to recover it. 00:38:32.818 [2024-12-09 09:56:08.074662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.074691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.075048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.075077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.075264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.075292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.075665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.075696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 
00:38:32.819 [2024-12-09 09:56:08.075968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.075997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.076350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.076379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.076724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.076754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.077111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.077140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.077480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.077509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.077856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.077885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.078215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.078244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.078589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.078618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.078957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.078988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 00:38:32.819 [2024-12-09 09:56:08.079341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.819 [2024-12-09 09:56:08.079370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.819 qpair failed and we were unable to recover it. 
00:38:32.819 [2024-12-09 09:56:08.079727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.819 [2024-12-09 09:56:08.079758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:32.819 qpair failed and we were unable to recover it.
00:38:32.819 [... the same three-line failure (connect() errno = 111 for tqpair=0xc74130 against 10.0.0.2:4420) repeats for roughly 200 further reconnect attempts between 09:56:08.080 and 09:56:08.156; duplicate entries elided ...]
00:38:32.825 [2024-12-09 09:56:08.156940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:32.825 [2024-12-09 09:56:08.156970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:32.825 qpair failed and we were unable to recover it.
00:38:32.825 [2024-12-09 09:56:08.157318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.157348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.157709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.157740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.158115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.158143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.158488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.158517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.158951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.158981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.159337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.159366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.159705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.159736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.160070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.160099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.160460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.160489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.160845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.160875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 
00:38:32.825 [2024-12-09 09:56:08.161206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.161235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.161592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.161865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.161898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.162276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.162305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.162654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.162685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.162907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.162939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.163275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.163305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.163676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.163707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.164069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.164098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.164441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.164470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 
00:38:32.825 [2024-12-09 09:56:08.164828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.164859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.165198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.165226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.165454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.165485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.165850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.165882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.166231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.166259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.166626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.166665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.167029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.167059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.167286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.167318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.167680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.167712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.168024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.168052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 
00:38:32.825 [2024-12-09 09:56:08.168431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.168460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.168808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.825 [2024-12-09 09:56:08.168839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.825 qpair failed and we were unable to recover it. 00:38:32.825 [2024-12-09 09:56:08.169187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.169218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.169557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.169588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.169944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.169975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.170338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.170367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.170568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.170604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.170963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.170994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.171347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.171376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.171732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.171763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 
00:38:32.826 [2024-12-09 09:56:08.172105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.172134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.172457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.172485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.172809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.172839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.173198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.173227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.173555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.173585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.173930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.173960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.174300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.174330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.174686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.174715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.175118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.175147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.175479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.175508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 
00:38:32.826 [2024-12-09 09:56:08.175859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.175890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.176244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.176273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.176622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.176663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.177025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.177054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.177398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.177426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.177772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.177802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.178150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.178178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.178531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.178559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.178851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.178881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.179126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.179154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 
00:38:32.826 [2024-12-09 09:56:08.179487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.179516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.179850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.179881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.180216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.180245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.180581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.180609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.180967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.181334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.181363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.181703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.181732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.182082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.182111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.182461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.826 [2024-12-09 09:56:08.182490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.826 qpair failed and we were unable to recover it. 00:38:32.826 [2024-12-09 09:56:08.182838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.182867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 
00:38:32.827 [2024-12-09 09:56:08.183227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.183255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.183610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.183669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.184002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.184032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.184387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.184415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.184707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.184737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.185094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.185123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.185459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.185488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.185854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.185889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.186245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.186274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.186608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.186636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 
00:38:32.827 [2024-12-09 09:56:08.186995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.187025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.187386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.187416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.187759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.187789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.188143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.188171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.188595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.188623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.189023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.189053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.189392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.189420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.189765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.189795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.190028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.190056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.190421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.190450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 
00:38:32.827 [2024-12-09 09:56:08.190728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.190757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.191124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.191153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.191499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.191528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.191871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.191901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.192241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.192270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.192589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.192617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.192975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.193004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.193256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.193284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.193616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.193654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.193979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.194007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 
00:38:32.827 [2024-12-09 09:56:08.194346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.194374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.194804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.194834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.195166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.195194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.195430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.195461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.195802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.195839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.196181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.196210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.827 [2024-12-09 09:56:08.196555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.827 [2024-12-09 09:56:08.196584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.827 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.196911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.196941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.197287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.197316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.197676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.197706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 
00:38:32.828 [2024-12-09 09:56:08.198047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.198074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.198494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.198523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.198747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.198776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.199105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.199135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.199477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.199506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.199863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.199893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.200248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.200276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.200628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.200666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.201004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.201034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.201436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.201465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 
00:38:32.828 [2024-12-09 09:56:08.201823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.201852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.202222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.202251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.202493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.202521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.202866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.202897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.203215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.203245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.203600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.203630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.203976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.204005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.204349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.204377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.204718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.204748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 00:38:32.828 [2024-12-09 09:56:08.205106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.205135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it. 
00:38:32.828 [2024-12-09 09:56:08.205477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:32.828 [2024-12-09 09:56:08.205505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:32.828 qpair failed and we were unable to recover it.
[... the three-message pattern above repeats continuously with only the timestamps advancing (09:56:08.205 through 09:56:08.282): every connect() attempt to addr=10.0.0.2, port=4420 for tqpair=0xc74130 fails with errno = 111, and each time the qpair is reported as failed and unrecoverable ...]
00:38:33.106 [2024-12-09 09:56:08.282444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.106 [2024-12-09 09:56:08.282473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.106 qpair failed and we were unable to recover it.
00:38:33.106 [2024-12-09 09:56:08.282814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.282845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.283214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.283242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.283626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.283678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.283923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.283955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.284306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.284336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.284657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.284692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.285038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.285067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.285433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.285463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.285814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.285844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.286197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.286232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 
00:38:33.107 [2024-12-09 09:56:08.286569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.286598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.286962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.286992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.287231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.287259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.287607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.287636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.287978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.288008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.288358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.288386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.288723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.288753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.289108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.289137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.289480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.289509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.289848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.289878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 
00:38:33.107 [2024-12-09 09:56:08.290196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.290231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.290542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.290570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.290917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.290947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.291288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.291316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.291565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.291594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.291921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.291951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.292355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.292384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.292738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.293084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.293114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.293467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.293497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 
00:38:33.107 [2024-12-09 09:56:08.293843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.293873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.294129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.294160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.294496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.294526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.294827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.294856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.295207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.295236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.295581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.295610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.295982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.296012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.107 [2024-12-09 09:56:08.296366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.107 [2024-12-09 09:56:08.296394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.107 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.296745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.296776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.297114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.297143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 
00:38:33.108 [2024-12-09 09:56:08.297488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.297517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.297851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.297881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.298229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.298259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.298614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.298649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.298991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.299020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.299430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.299459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.299687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.299719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.300100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.300129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.300453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.300483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.300834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.300864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 
00:38:33.108 [2024-12-09 09:56:08.301104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.301136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.301488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.301517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.301902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.301932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.302276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.302305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.302654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.302684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.303045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.303074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.303351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.303379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.303758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.303789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.304086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.304115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.304465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.304493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 
00:38:33.108 [2024-12-09 09:56:08.304833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.304864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.305213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.305241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.305612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.305649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.305976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.306005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.306346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.306376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.306729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.306759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.307111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.307140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.307485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.307513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.307752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.307785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.308106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.308135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 
00:38:33.108 [2024-12-09 09:56:08.308378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.308407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.308739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.308771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.309118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.309147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.309500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.309528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.108 [2024-12-09 09:56:08.309876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.108 [2024-12-09 09:56:08.309905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.108 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.310227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.310256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.310612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.310649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.310994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.311028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.311349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.311378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.311724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.311756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 
00:38:33.109 [2024-12-09 09:56:08.311996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.312027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.312376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.312405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.312766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.312796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.313140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.313169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.313520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.313548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.313889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.313918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.314242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.314272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.314615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.314652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.315008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.315037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.315368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.315396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 
00:38:33.109 [2024-12-09 09:56:08.315655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.315685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.315912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.315940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.316263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.316292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.316620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.316659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.316984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.317012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.317350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.317379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.317625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.317663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.317922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.317954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.318302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.318331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.318691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.318720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 
00:38:33.109 [2024-12-09 09:56:08.319065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.319094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.319445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.319474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.319810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.319839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.320195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.320225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.320568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.320598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.320951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.320981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.321326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.109 [2024-12-09 09:56:08.321355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.109 qpair failed and we were unable to recover it. 00:38:33.109 [2024-12-09 09:56:08.321714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.321744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.322119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.322147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.322566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.322595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 
00:38:33.110 [2024-12-09 09:56:08.322847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.322877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.323197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.323226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.323571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.323600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.323953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.323984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.324320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.324349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.324753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.324784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.325116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.325145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.325536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.325565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.325888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.325925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.326295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 
00:38:33.110 [2024-12-09 09:56:08.326721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.326752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.327093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.327121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.327549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.327578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.327924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.327954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.328373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.328402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.328809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.328840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.329225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.329254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.329580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.329610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.329958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.329988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 00:38:33.110 [2024-12-09 09:56:08.330325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.110 [2024-12-09 09:56:08.330353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.110 qpair failed and we were unable to recover it. 
00:38:33.110 [2024-12-09 09:56:08.330691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.110 [2024-12-09 09:56:08.330735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:33.110 qpair failed and we were unable to recover it.
[... the same three-line error repeats back-to-back, with only the microsecond timestamps advancing, from 09:56:08.330691 through 09:56:08.408027 (roughly 77 ms of target time); every attempt fails with errno = 111 against tqpair=0xc74130 at 10.0.0.2, port 4420; intermediate repetitions elided ...]
00:38:33.116 [2024-12-09 09:56:08.407998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.116 [2024-12-09 09:56:08.408027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:33.116 qpair failed and we were unable to recover it.
00:38:33.116 [2024-12-09 09:56:08.408361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.408390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.408631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.408672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.409022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.409051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.409390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.409420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.409766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.409796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.410144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.410173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.410516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.410544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.410884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.410915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.411203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.411231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.411567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.411597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 
00:38:33.116 [2024-12-09 09:56:08.411940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.411971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.412314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.412342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.412688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.412718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.413079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.413108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.413446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.413474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.413728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.413759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.413996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.414025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.414391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.414421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.414750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.414781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.415015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.415043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 
00:38:33.116 [2024-12-09 09:56:08.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.415416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.415792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.415823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.416162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.416192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.416556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.416586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.416929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.416959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.417303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.417332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.116 qpair failed and we were unable to recover it. 00:38:33.116 [2024-12-09 09:56:08.417678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.116 [2024-12-09 09:56:08.417709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.418044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.418072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.418407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.418437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.418778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.418808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 
00:38:33.117 [2024-12-09 09:56:08.419142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.419170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.419520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.419548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.419879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.419909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.420236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.420265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.420610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.420646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.420996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.421024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.421356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.421385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.421752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.421783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.422122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.422151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.422500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.422529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 
00:38:33.117 [2024-12-09 09:56:08.422865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.422895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.423247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.423275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.423619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.423670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.424023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.424052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.424398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.424426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.424784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.424814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.425154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.425183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.425534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.425562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.425917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.425948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.426300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.426329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 
00:38:33.117 [2024-12-09 09:56:08.426676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.426712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.427078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.427106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.427453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.427481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.427830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.427860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.428277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.428306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.428660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.428689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.429038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.429068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.429408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.429438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.429780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.429813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.430159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.430188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 
00:38:33.117 [2024-12-09 09:56:08.430429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.430458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.430806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.430837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.431160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.431190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.431513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.117 [2024-12-09 09:56:08.431544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.117 qpair failed and we were unable to recover it. 00:38:33.117 [2024-12-09 09:56:08.431871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.431901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.432256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.432285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.432613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.432667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.433054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.433083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.433408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.433438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.433805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.433836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 
00:38:33.118 [2024-12-09 09:56:08.434181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.434211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.434545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.434574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.435013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.435043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.435392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.435421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.435667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.435697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.436031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.436060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.436389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.436418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.436776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.436806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.437144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.437173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.437541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.437569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 
00:38:33.118 [2024-12-09 09:56:08.437912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.437944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.438286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.438314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.438649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.438680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.439004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.439033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.439370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.439399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.439749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.439779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.440126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.440153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.440516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.440545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.440776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.440805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.441128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.441156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 
00:38:33.118 [2024-12-09 09:56:08.441497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.441527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.441869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.441905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.442246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.442275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.442635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.442674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.443016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.443046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.443393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.443421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.443677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.443706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.444051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.118 [2024-12-09 09:56:08.444080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.118 qpair failed and we were unable to recover it. 00:38:33.118 [2024-12-09 09:56:08.444428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.444457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.444818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.444847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 
00:38:33.119 [2024-12-09 09:56:08.445166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.445195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.445545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.445573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.445909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.445940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.446280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.446308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.446663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.446693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.446976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.447004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.447333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.447362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.447675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.447712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.448057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.448085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.448452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.448488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 
00:38:33.119 [2024-12-09 09:56:08.448844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.448874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.449217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.449246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.449510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.449538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.449918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.449949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.450187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.450215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.450545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.450573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.450888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.450918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.451264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.451293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.451633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.451678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.452003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.452032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 
00:38:33.119 [2024-12-09 09:56:08.452431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.452459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.452802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.452832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.453260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.453289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.453632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.453671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.454014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.454042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.454387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.454415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.454753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.454784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.455118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.455146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.455484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.455513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 00:38:33.119 [2024-12-09 09:56:08.455768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.455798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it. 
00:38:33.119 [2024-12-09 09:56:08.456169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.119 [2024-12-09 09:56:08.456197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.119 qpair failed and we were unable to recover it.
(the three-message sequence above, posix_sock_create connect() failure with errno = 111 followed by the nvme_tcp_qpair_connect_sock error and "qpair failed and we were unable to recover it.", repeats 208 more times for the same tqpair=0xc74130, addr=10.0.0.2, port=4420, from 09:56:08.456 through 09:56:08.533)
00:38:33.125 [2024-12-09 09:56:08.532752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.532784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it.
00:38:33.125 [2024-12-09 09:56:08.533120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.533148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.533489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.533518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.533877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.533906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.534218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.534247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.534598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.534626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.534968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.535034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.535246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.535275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.535531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.535564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.535912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.535944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.536288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.536318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 
00:38:33.125 [2024-12-09 09:56:08.536663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.536695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.537138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.537168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.537576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.537605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.125 qpair failed and we were unable to recover it. 00:38:33.125 [2024-12-09 09:56:08.537975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.125 [2024-12-09 09:56:08.538006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.538339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.538369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.538712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.538748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.539086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.539115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.539483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.539513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.539857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.539888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.540242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.540270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 
00:38:33.126 [2024-12-09 09:56:08.540512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.540541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.540872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.540902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.541230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.541259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.541610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.541655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.541972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.542002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.542360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.542388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.542736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.542766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.543106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.543134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.543533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.543562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 00:38:33.126 [2024-12-09 09:56:08.543888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.126 [2024-12-09 09:56:08.543919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.126 qpair failed and we were unable to recover it. 
00:38:33.399 [2024-12-09 09:56:08.544272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.544302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.544656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.544687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.545031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.545060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.545399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.545428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.545771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.545801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.546006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.546038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.546349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.546379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.546749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.546781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.547141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.547170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.547566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.547594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 
00:38:33.399 [2024-12-09 09:56:08.547939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.547969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.548290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.548318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.548656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.548691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.549044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.549072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.549414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.549443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.549804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.549834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.550152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.550181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.550525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.550554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.550786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.550818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.551187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.551216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 
00:38:33.399 [2024-12-09 09:56:08.551436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.551467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.551856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.551887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.552239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.552269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.552646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.552677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.553036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.553065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.553406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.553434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.553782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.553812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.399 [2024-12-09 09:56:08.554143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.399 [2024-12-09 09:56:08.554172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.399 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.554521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.554550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.554901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.554930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 
00:38:33.400 [2024-12-09 09:56:08.555267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.555296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.555657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.555688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.556013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.556042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.556367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.556395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.556646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.556684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.557032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.557062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.557421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.557449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.557790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.557820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.558137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.558166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.558400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.558431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 
00:38:33.400 [2024-12-09 09:56:08.558773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.558803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.559018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.559046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.559376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.559405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.559773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.559803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.560144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.560172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.560525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.560554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.560909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.560939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.561336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.561365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.561707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.561739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.562086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.562115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 
00:38:33.400 [2024-12-09 09:56:08.562360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.562392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.562613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.562656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.562869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.562898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.563252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.563280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.563651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.563681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.564054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.564083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.564408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.564437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.564769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.564799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.565141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.565171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.565533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.565561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 
00:38:33.400 [2024-12-09 09:56:08.565879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.565910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.566258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.566294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.566650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.566679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.567008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.567037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.567376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.400 [2024-12-09 09:56:08.567406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.400 qpair failed and we were unable to recover it. 00:38:33.400 [2024-12-09 09:56:08.567759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.567789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.568133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.568162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.568526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.568899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.568930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.569275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.569304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 
00:38:33.401 [2024-12-09 09:56:08.569544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.569573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.569940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.569971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.570319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.570349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.570694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.570724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.571123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.571152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.571503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.571533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.571882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.571912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.572256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.572285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.572630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.572668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.573017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.573046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 
00:38:33.401 [2024-12-09 09:56:08.573400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.573429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.573782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.573813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.574166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.574194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.574533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.574562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.574908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.574940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.575255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.575284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.575618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.575655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.576019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.576048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.576375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.576403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.576763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.576794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 
00:38:33.401 [2024-12-09 09:56:08.577035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.577063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.577391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.577419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.577740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.577770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.578086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.578115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.578458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.578487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.578840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.578870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.579212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.579241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.579579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.579607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.579958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.579988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.580331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.580359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 
00:38:33.401 [2024-12-09 09:56:08.580788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.580818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.581150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.581178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.581520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.401 [2024-12-09 09:56:08.581554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.401 qpair failed and we were unable to recover it. 00:38:33.401 [2024-12-09 09:56:08.581925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.581956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.582181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.582209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.582545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.582574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.582923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.582953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.583300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.583329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.583677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.583707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 00:38:33.402 [2024-12-09 09:56:08.584044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.402 [2024-12-09 09:56:08.584073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.402 qpair failed and we were unable to recover it. 
00:38:33.407 [2024-12-09 09:56:08.654662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.654693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.655036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.655065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.655295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.655324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.655690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.655721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.656080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.656110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.656463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.656492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.656875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.656911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.657250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.657279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.657629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.657668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.658001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.658030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 
00:38:33.407 [2024-12-09 09:56:08.658386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.658417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.658764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.658794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.659146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.659176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.659513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.659541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.659870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.659900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.660132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.660161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.660479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.660509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.660850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.660880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.661232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.661261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.661630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.661669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 
00:38:33.407 [2024-12-09 09:56:08.662008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.662037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.662384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.662412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.407 [2024-12-09 09:56:08.662765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.407 [2024-12-09 09:56:08.662794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.407 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.663162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.663192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.663512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.663542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.663887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.663918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.664252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.664281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.664636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.664676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.665090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.665119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.665427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.665456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 
00:38:33.408 [2024-12-09 09:56:08.665831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.665861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.666200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.666230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.666564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.666594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.666874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.666904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.667253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.667283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.667619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.667658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.667996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.668024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.668366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.668395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.668738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.668769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.669126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.669155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 
00:38:33.408 [2024-12-09 09:56:08.669505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.669534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.669875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.669906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.670255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.670284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.670665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.670695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.671027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.671056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.671464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.671493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.671726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.671756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.672120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.672150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.672510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.672539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.672881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.672912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 
00:38:33.408 [2024-12-09 09:56:08.673271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.673300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.673540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.673569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.673812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.673843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.674063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.674092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.674400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.408 [2024-12-09 09:56:08.674429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.408 qpair failed and we were unable to recover it. 00:38:33.408 [2024-12-09 09:56:08.674725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.674761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.675078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.675108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.675470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.675499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.675826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.675856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.676224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.676253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 
00:38:33.409 [2024-12-09 09:56:08.676598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.676627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.676998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.677037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.677360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.677389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.677713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.677746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.678183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.678211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.678557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.678585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.678919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.678949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.679304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.679333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.679691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.679721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.680056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.680085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 
00:38:33.409 [2024-12-09 09:56:08.680421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.680450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.680819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.680850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.681165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.681195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.681445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.681474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.681792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.681834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.682191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.682220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.682565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.682595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.682931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.682961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.683312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.683343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.683693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.683723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 
00:38:33.409 [2024-12-09 09:56:08.684160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.684190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.684527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.684556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.684918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.684948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.685291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.685319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.685676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.685707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.686035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.686065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.686420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.686449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.686816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.686845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.687208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.687238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.687556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.687586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 
00:38:33.409 [2024-12-09 09:56:08.687940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.687970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.688332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.688361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.688689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.409 [2024-12-09 09:56:08.688720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.409 qpair failed and we were unable to recover it. 00:38:33.409 [2024-12-09 09:56:08.689097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.689126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.689448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.689477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.689824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.689855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.690087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.690116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.690541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.690570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.690911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.690941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.691294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.691322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 
00:38:33.410 [2024-12-09 09:56:08.691668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.691699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.692074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.692102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.692459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.692488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.692820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.692850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.693281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.693310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.693532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.693561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.693974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.694005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.694347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.694376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.694704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.694734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.695095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.695123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 
00:38:33.410 [2024-12-09 09:56:08.695460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.695489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.695737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.695767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.696111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.696139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.696482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.696510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.696869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.696900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.697222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.697256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.697603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.697631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.697971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.698000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.698364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.698392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.698769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.698799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 
00:38:33.410 [2024-12-09 09:56:08.699157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.699186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.699510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.699538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.699882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.699912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.700256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.700285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.700630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.700668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.701065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.701094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.701445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.701474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.701815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.701844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.702200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.702229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.410 [2024-12-09 09:56:08.702579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.702608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 
00:38:33.410 [2024-12-09 09:56:08.703036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.410 [2024-12-09 09:56:08.703066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.410 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.703412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.703440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.703788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.703820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.704162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.704191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.704531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.704566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.704911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.704942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.705297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.705326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.705669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.705699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.706028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.706057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.706419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.706448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 
00:38:33.411 [2024-12-09 09:56:08.706880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.706909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.707257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.707285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.707647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.707683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.708043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.708072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.708426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.708454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.708790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.708821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.709135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.709163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.709510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.709539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.709875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.709905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 00:38:33.411 [2024-12-09 09:56:08.710233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.411 [2024-12-09 09:56:08.710262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.411 qpair failed and we were unable to recover it. 
00:38:33.416 [2024-12-09 09:56:08.780327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.780364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.780690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.780720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.781070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.781098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.781466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.781494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.781822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.781853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.782172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.782201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.782582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.782611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.782973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.783003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.783319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.783348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 00:38:33.416 [2024-12-09 09:56:08.783670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.416 [2024-12-09 09:56:08.783699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.416 qpair failed and we were unable to recover it. 
00:38:33.417 [2024-12-09 09:56:08.784019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.784048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.784375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.784403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.784658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.784688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.785016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.785045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.785439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.785467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.785803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.785834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.786178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.786208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.786543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.786571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.786917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.786947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.787288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.787316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 
00:38:33.417 [2024-12-09 09:56:08.787672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.787703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.788049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.788078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.788307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.788336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.788587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.788615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.788995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.789025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.789337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.789366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.789717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.789748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.790114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.790143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.790487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.790515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.790852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.790883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 
00:38:33.417 [2024-12-09 09:56:08.791243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.791272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.791635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.791673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.792030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.792059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.792402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.792431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.792850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.792880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.793218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.793248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.793615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.793654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.793864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.793892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.794241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.794269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.794612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.794650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 
00:38:33.417 [2024-12-09 09:56:08.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.795010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.795357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.795386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.795630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.795670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.795992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.796020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.796372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.796400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.417 [2024-12-09 09:56:08.796745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.417 [2024-12-09 09:56:08.796775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.417 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.797149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.797178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.797417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.797450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.797783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.797812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.798166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.798195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 
00:38:33.418 [2024-12-09 09:56:08.798522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.798558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.798859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.798889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.799235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.799263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.799660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.799690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.800031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.800059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.800408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.800436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.800783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.800813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.801167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.801196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.801521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.801550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.801928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.801958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 
00:38:33.418 [2024-12-09 09:56:08.802281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.802308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.802622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.802659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.802993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.803022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.803386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.803414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.803760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.803789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.804145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.804174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.804529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.804557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.804944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.804975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.805319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.805347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.805698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.805728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 
00:38:33.418 [2024-12-09 09:56:08.806023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.806053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.806388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.806417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.806758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.806788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.807131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.807159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.807408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.807437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.807794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.807824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.808168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.808197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.808554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.808583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.808941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.808970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 00:38:33.418 [2024-12-09 09:56:08.809323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.809352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.418 qpair failed and we were unable to recover it. 
00:38:33.418 [2024-12-09 09:56:08.809699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.418 [2024-12-09 09:56:08.809729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.809970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.809997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.810353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.810382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.810727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.810757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.811081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.811110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.811386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.811414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.811760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.811790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.812147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.812175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.812524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.812557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.812892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.812922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 
00:38:33.419 [2024-12-09 09:56:08.813267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.813296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.813660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.813690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.814052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.814081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.814422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.814451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.814803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.814833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.815171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.815200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.815544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.815572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.815924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.815955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.816146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.816174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.816545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.816573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 
00:38:33.419 [2024-12-09 09:56:08.816917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.816947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.817293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.817321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.817691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.817722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.817962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.817990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.818347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.818376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.818739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.818769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.819107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.819136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.819493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.819523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.819866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.819896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.820236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.820264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 
00:38:33.419 [2024-12-09 09:56:08.820613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.820655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.820982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.821011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.821360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.821389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.821747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.821777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.822150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.822179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.822556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.822590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.822948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.822977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.823314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.419 [2024-12-09 09:56:08.823343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.419 qpair failed and we were unable to recover it. 00:38:33.419 [2024-12-09 09:56:08.823699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.823730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.824046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.824074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 
00:38:33.420 [2024-12-09 09:56:08.824430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.824459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.824805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.824835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.825080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.825108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.825469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.825498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.826018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.826048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.826396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.826425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.826674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.826703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.827094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.827123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.827369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.827398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.827734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.827764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 
00:38:33.420 [2024-12-09 09:56:08.828119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.828148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.828514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.828543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.828872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.828902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.829289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.829318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.829675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.829706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.830050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.830079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.830421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.830450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.830793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.830823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.831169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.831197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 00:38:33.420 [2024-12-09 09:56:08.831558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.420 [2024-12-09 09:56:08.831586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.420 qpair failed and we were unable to recover it. 
00:38:33.420 [2024-12-09 09:56:08.831918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:33.420 [2024-12-09 09:56:08.831948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:33.420 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back with only the timestamps advancing, from [2024-12-09 09:56:08.832337] through [2024-12-09 09:56:08.909654] (log clock 00:38:33.420 to 00:38:33.700); every reconnect attempt to 10.0.0.2:4420 is refused ...]
00:38:33.700 [2024-12-09 09:56:08.909989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.910017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.910274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.910304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.910686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.910716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.911062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.911090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.911470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.911499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.911839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.911869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.912198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.912226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.912562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.912591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.912921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.912952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.913113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.913141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 
00:38:33.700 [2024-12-09 09:56:08.913381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.913417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.913771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.913801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.914136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.914165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.914527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.914563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.914893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.914922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.915265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.915293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.915729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.915759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.916095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.916122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.916481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.916509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.916800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.916829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 
00:38:33.700 [2024-12-09 09:56:08.917184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.917213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.917570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.917598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.918041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.918072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.918386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.918414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.918658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.918688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.919039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.919068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.919357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.919385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.919723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.919754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.920109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.920138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.920490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.920519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 
00:38:33.700 [2024-12-09 09:56:08.920871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.920901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.921257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.921284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.921516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.921545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.921939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.921969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.922288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.922317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.700 [2024-12-09 09:56:08.922658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.700 [2024-12-09 09:56:08.922688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.700 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.923036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.923065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.923426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.923455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.923772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.923802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.924136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.924164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-12-09 09:56:08.924513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.924541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.924795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.924825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.925165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.925194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.925547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.925576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.925931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.925961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.926300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.926329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.926670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.926700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.927042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.927070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.927489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.927518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.927889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.927919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-12-09 09:56:08.928262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.928290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.928687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.928718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.929037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.929067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.929423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.929451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.929802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.929832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.930058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.930086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.930336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.930368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.930702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.930732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.931094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.931122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.931481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.931510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-12-09 09:56:08.931852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.931882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.932235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.932264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.932616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.932652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.933001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.933030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.933387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.933416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.933760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.933791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.934116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.934144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.934500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.934529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.934881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.934911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.935246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.935275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 
00:38:33.701 [2024-12-09 09:56:08.935620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.935660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.936012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.936042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.936402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.936431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.701 [2024-12-09 09:56:08.936784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.701 [2024-12-09 09:56:08.936814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.701 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.937142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.937170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.937403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.937434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.937774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.937805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.938142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.938171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.938514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.938549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.938893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.938923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-12-09 09:56:08.939160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.939190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.939571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.939600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.939998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.940029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.940364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.940394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.940725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.940756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.941095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.941125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.941488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.941517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.941909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.941939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.942263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.942293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.942525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.942555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-12-09 09:56:08.942910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.942940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.943288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.943316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.943662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.943692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.944043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.944072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.944422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.944452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.944799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.944830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.945183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.945211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.945551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.945580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.945915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.945945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.946280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.946309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-12-09 09:56:08.946675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.946705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.947049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.947079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.947493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.947521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.947887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.947917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.948254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.948283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.948562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.948591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.948948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.948979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.949308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.949337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.949584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.949613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.949973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.950003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 
00:38:33.702 [2024-12-09 09:56:08.950340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.950368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.702 [2024-12-09 09:56:08.950598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.702 [2024-12-09 09:56:08.950628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.702 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.950987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.951016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.951360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.951389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.951737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.951767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.952124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.952153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.952495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.952524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.952859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.952889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.953247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.953276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.953619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.953668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 
00:38:33.703 [2024-12-09 09:56:08.954016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.954045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.954382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.954411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.954791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.954820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.955052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.955084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.955394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.955423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.955758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.955788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.956142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.956171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.956517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.956547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.956888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.956918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 00:38:33.703 [2024-12-09 09:56:08.957252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.957281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it. 
00:38:33.703 [2024-12-09 09:56:08.957627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.703 [2024-12-09 09:56:08.957666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.703 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 09:56:08.958 and 09:56:09.034 ...]
00:38:33.709 [2024-12-09 09:56:09.034337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.034365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it.
00:38:33.709 [2024-12-09 09:56:09.034716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.034747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.035073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.035102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.035369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.035397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.035765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.035795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.036150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.036179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.036517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.036546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.036894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.036924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.037251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.037279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.037608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.037655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.037930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.037958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 
00:38:33.709 [2024-12-09 09:56:09.038270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.038299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.038649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.038680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.038995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.039023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.039334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.039363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.039699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.039729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.040058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.040086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.040515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.040544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.040891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.040922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.041295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.041324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.041677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.041708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 
00:38:33.709 [2024-12-09 09:56:09.042038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.042067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.042404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.042434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.042769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.042799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.043132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.043161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.043493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.043523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.043861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.043891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.044233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.044262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.709 [2024-12-09 09:56:09.044504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.709 [2024-12-09 09:56:09.044532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.709 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.044879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.044909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.045223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.045259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 
00:38:33.710 [2024-12-09 09:56:09.045578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.045606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.045950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.045980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.046318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.046346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.046714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.046745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.047095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.047124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.047376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.047405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.047730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.047761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.048128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.048157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.048502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.048531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.048904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.048934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 
00:38:33.710 [2024-12-09 09:56:09.049268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.049297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.049652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.049682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.050023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.050052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.050392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.050420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.050770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.050800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.051144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.051175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.051528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.051557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.051948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.051978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.052309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.052336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.052713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.052743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 
00:38:33.710 [2024-12-09 09:56:09.053093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.053122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.053472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.053509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.053754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.053785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.054131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.054160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.054517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.054547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.054939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.054970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.055306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.055333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.055665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.055696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.056025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.056054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.056378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.056407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 
00:38:33.710 [2024-12-09 09:56:09.056737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.056767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.057143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.057172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.057488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.057517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.057850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.057880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.058228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.058256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.710 qpair failed and we were unable to recover it. 00:38:33.710 [2024-12-09 09:56:09.058595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.710 [2024-12-09 09:56:09.058624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.058981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.059010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.059372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.059401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.059744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.059775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.060118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.060147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 
00:38:33.711 [2024-12-09 09:56:09.060480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.060510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.060853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.060883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.061217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.061247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.061608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.061645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.062003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.062032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.062393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.062422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.062664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.062694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.063015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.063044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.063406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.063435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.063792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.063824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 
00:38:33.711 [2024-12-09 09:56:09.064147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.064178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.064545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.064574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.064900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.064931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.065240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.065269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.065584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.065614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.065983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.066014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.066348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.066377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.066715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.066747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.067047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.067076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.067412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.067441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 
00:38:33.711 [2024-12-09 09:56:09.067796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.067826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.068166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.068195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.068520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.068555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.068794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.068823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.069099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.069128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.069456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.069485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.069820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.069850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.070183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.070212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.070558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.070587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.070931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.070962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 
00:38:33.711 [2024-12-09 09:56:09.071275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.071306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.071618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.071658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.071889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.071920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.711 qpair failed and we were unable to recover it. 00:38:33.711 [2024-12-09 09:56:09.072246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.711 [2024-12-09 09:56:09.072276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.072647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.072677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.073014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.073043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.073414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.073443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.073705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.073735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.073943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.073973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.074305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.074333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 
00:38:33.712 [2024-12-09 09:56:09.074679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.074709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.075078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.075106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.075446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.075475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.075815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.075845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.076121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.076152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.076471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.076499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.076853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.076883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.077231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.077260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.077465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.077494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.077820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.077860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 
00:38:33.712 [2024-12-09 09:56:09.078201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.078231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.078596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.078632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.078983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.079013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.079333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.079363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.079687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.079717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.080053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.080081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.080418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.080446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.080789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.080819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.081163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.081192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.081554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.081582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 
00:38:33.712 [2024-12-09 09:56:09.081958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.081989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.082329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.082357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.082663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.082695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.083051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.083082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.083455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.083483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.083876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.084234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.084264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.084676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.084707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.085051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.085080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.085431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.085459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 
00:38:33.712 [2024-12-09 09:56:09.085771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.085802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.712 qpair failed and we were unable to recover it. 00:38:33.712 [2024-12-09 09:56:09.086182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.712 [2024-12-09 09:56:09.086210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.086569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.086599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.086876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.086907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.087258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.087377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.087731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.087762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.088121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.088151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.088514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.088543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.088902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.088933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.089229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.089258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 
00:38:33.713 [2024-12-09 09:56:09.089630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.089668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.090022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.090052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.090389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.090417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.090824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.090859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.091236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.091265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.091620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.091663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.091997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.092026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.092352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.092381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.092711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.092741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.093097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.093125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 
00:38:33.713 [2024-12-09 09:56:09.093379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.093414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.093770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.093800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.094147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.094181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.094516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.094544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.094885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.094916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.095358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.095387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.095727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.095757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.095942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.095971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.096315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.096344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.096701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.096731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 
00:38:33.713 [2024-12-09 09:56:09.097046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.097074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.097432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.097461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.097743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.097772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.098143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.713 [2024-12-09 09:56:09.098172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.713 qpair failed and we were unable to recover it. 00:38:33.713 [2024-12-09 09:56:09.098480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.098510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.098836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.098865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.099223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.099252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.099473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.099503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.099859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.099888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.100234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.100264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 
00:38:33.714 [2024-12-09 09:56:09.100615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.100670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.100993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.101021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.101240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.101267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.101618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.101658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.101977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.102005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.102349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.102378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.102720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.102751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.103117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.103152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.103503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.103532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.103904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.103934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 
00:38:33.714 [2024-12-09 09:56:09.104159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.104187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.104527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.104556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.104902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.104932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.105284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.105312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.105720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.105751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.106078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.106106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.106322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.106350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.106627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.106668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.107051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.107080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.107421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.107450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 
00:38:33.714 [2024-12-09 09:56:09.107794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.107825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.108166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.108195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.108541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.108569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.108925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.108955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.109293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.109321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.109688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.109717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.110052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.110082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.110387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.110415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.110633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.110672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.110996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 
00:38:33.714 [2024-12-09 09:56:09.111388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.111417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.111727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.714 [2024-12-09 09:56:09.111757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.714 qpair failed and we were unable to recover it. 00:38:33.714 [2024-12-09 09:56:09.112106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.112135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.112300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.112328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.112689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.112720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.113052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.113081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.113312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.113341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.113603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.113632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.113802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.113832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.114043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.114072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 
00:38:33.715 [2024-12-09 09:56:09.114400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.114428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.114780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.114811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.115046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.115075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.115426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.115455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.115811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.115841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.116098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.116126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.116502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.116532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.116752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.116782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.117122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.117156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.117381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.117411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 
00:38:33.715 [2024-12-09 09:56:09.117681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.117712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.118092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.118121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.118376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.118405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.118765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.118796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.119137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.119167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.119491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.119521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.119941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.119971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.120186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.120214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.120550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.120579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.120934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.120963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 
00:38:33.715 [2024-12-09 09:56:09.121101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.121129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.121483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.121513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.121681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.121710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.122072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.122102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.122446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.122475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.122715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.122746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.123064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.123093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.123432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.123461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.123801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.715 [2024-12-09 09:56:09.123831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.715 qpair failed and we were unable to recover it. 00:38:33.715 [2024-12-09 09:56:09.124162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.124191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 
00:38:33.716 [2024-12-09 09:56:09.124555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.124585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.124932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.124962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.125310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.125338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.125741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.125771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.126006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.126034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.126270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.126305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.126695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.126725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.126984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.127012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.127234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.127263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.127521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.127549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 
00:38:33.716 [2024-12-09 09:56:09.127911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.127942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.128300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.128328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.128702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.128731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.129056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.129085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.129420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.129450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.129581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.129609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.129888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.129918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.130272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.130301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.130669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.130699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.131079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.131109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 
00:38:33.716 [2024-12-09 09:56:09.131359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.131388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.131665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.131696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.132052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.132081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.132305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.132334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.132574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.132603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.132958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.132989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.133302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.133330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.716 [2024-12-09 09:56:09.133669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.716 [2024-12-09 09:56:09.133701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.716 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.134053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.134083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.134477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.134506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 
00:38:33.990 [2024-12-09 09:56:09.134876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.134906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.135229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.135260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.135624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.135665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.136008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.136038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.136464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.136493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.136737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.136768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.137011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.137040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.137405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.137433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.137817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.137848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.138211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.138240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 
00:38:33.990 [2024-12-09 09:56:09.138593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.138621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.138989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.139019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.139378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.139406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.139802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.139831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.140180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.140208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.140538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.140568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.140803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.140845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.141192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.141223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.141463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.141491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.141800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.141832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 
00:38:33.990 [2024-12-09 09:56:09.142197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.142599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.142629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.142989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.143019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.143339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.143367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.143729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.143759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.144112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.144139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.990 [2024-12-09 09:56:09.144507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.990 [2024-12-09 09:56:09.144537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.990 qpair failed and we were unable to recover it. 00:38:33.991 [2024-12-09 09:56:09.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.991 [2024-12-09 09:56:09.144926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.991 qpair failed and we were unable to recover it. 00:38:33.991 [2024-12-09 09:56:09.145248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.991 [2024-12-09 09:56:09.145278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.991 qpair failed and we were unable to recover it. 00:38:33.991 [2024-12-09 09:56:09.145628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.991 [2024-12-09 09:56:09.145668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.991 qpair failed and we were unable to recover it. 
00:38:33.996 [2024-12-09 09:56:09.214950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.214985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.215336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.215365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.215716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.215746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.216002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.216030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.216379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.216407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.216722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.216752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.217133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.217162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.217517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.217547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.217976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.218006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.218317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.218346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 
00:38:33.996 [2024-12-09 09:56:09.218702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.218731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.219037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.219066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.219419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.219448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.219833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.219863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.220199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.220228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.220601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.220629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.220981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.221011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.221369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.221397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.221762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.221792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.222040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.222068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 
00:38:33.996 [2024-12-09 09:56:09.222407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.222436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.222782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.222812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.223048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.223076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.223411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.223440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.223794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.223824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.224166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.224194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.996 [2024-12-09 09:56:09.224545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.996 [2024-12-09 09:56:09.224574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.996 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.224793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.224823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.225153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.225183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.225539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.225568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 
00:38:33.997 [2024-12-09 09:56:09.225930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.225960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.226323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.226352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.226703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.226732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.226991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.227019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.227288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.227316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.227661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.227691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.228026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.228055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.228418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.228447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.228790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.228821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.229166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.229195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 
00:38:33.997 [2024-12-09 09:56:09.229449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.229477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.229832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.229870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.230111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.230139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.230498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.230528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.230828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.230858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.231214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.231242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.231493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.231521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.231859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.231889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.232252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.232282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.232659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.232689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 
00:38:33.997 [2024-12-09 09:56:09.233040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.233068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.233411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.233440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.233774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.233805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.234154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.234182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.234537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.234565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.234932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.234961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.235375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.235404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.235744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.235773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.236133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.236162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.236526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.236555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 
00:38:33.997 [2024-12-09 09:56:09.236915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.236944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.237290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.237319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.237680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.237711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.238072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.238101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.997 qpair failed and we were unable to recover it. 00:38:33.997 [2024-12-09 09:56:09.238433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.997 [2024-12-09 09:56:09.238461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.238805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.238835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.239079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.239107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.239472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.239501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.239845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.239881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.240244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.240271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 
00:38:33.998 [2024-12-09 09:56:09.240631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.240675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.241034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.241063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.241414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.241442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.241696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.241727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.242066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.242094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.242444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.242473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.242826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.242856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.243230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.243259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.243618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.243655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.243898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.243928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 
00:38:33.998 [2024-12-09 09:56:09.244305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.244334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.244696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.244726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.244978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.245007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.245348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.245377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.245741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.245771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.246142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.246172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.246514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.246543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.246943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.246973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.247297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.247326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.247686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.247716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 
00:38:33.998 [2024-12-09 09:56:09.248076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.248105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.248452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.248482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.248823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.248853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.249215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.249243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.249573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.249602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.249990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.250020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.250328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.250358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.250719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.250750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.251078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.251107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.998 [2024-12-09 09:56:09.251437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.251466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 
00:38:33.998 [2024-12-09 09:56:09.251720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.998 [2024-12-09 09:56:09.251750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.998 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.252099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.252128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.252479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.252508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.252878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.252907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.253233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.253262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.253631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.253670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.253987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.254021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.254375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.254404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.254740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.254771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.255104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.255138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 
00:38:33.999 [2024-12-09 09:56:09.255384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.255412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.257126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.257178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.257542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.257573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.257933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.257964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.258305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.258332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.258677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.258708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.259091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.259120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.259459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.259487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.259876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.259906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.260229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.260258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 
00:38:33.999 [2024-12-09 09:56:09.260619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.260657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.261042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.261071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.261385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.261414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.261764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.262116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.262145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.262500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.262529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.262899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.262929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.263239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.263274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.263681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.263711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.264065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.264094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 
00:38:33.999 [2024-12-09 09:56:09.264446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.264474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.264877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.264907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.265230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.265259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:33.999 qpair failed and we were unable to recover it. 00:38:33.999 [2024-12-09 09:56:09.265619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:33.999 [2024-12-09 09:56:09.265658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.265992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.266021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.266358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.266387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.266758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.266789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.267217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.267245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.267596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.267624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 00:38:34.000 [2024-12-09 09:56:09.267968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.000 [2024-12-09 09:56:09.267997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.000 qpair failed and we were unable to recover it. 
00:38:34.000 [2024-12-09 09:56:09.268334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.000 [2024-12-09 09:56:09.268362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.000 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 09:56:09.268720 through 09:56:09.343713, differing only in timestamps ...]
00:38:34.005 [2024-12-09 09:56:09.344034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.005 [2024-12-09 09:56:09.344063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.005 qpair failed and we were unable to recover it.
00:38:34.005 [2024-12-09 09:56:09.347445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.005 [2024-12-09 09:56:09.347474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.005 qpair failed and we were unable to recover it.
00:38:34.005 [2024-12-09 09:56:09.347815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.347845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.005 [2024-12-09 09:56:09.348222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.348251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.005 [2024-12-09 09:56:09.348590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.348619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.005 [2024-12-09 09:56:09.349025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.349055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.005 [2024-12-09 09:56:09.349396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.349430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.005 [2024-12-09 09:56:09.349796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.005 [2024-12-09 09:56:09.349827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.005 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.350177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.350206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.350562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.350590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.350922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.350952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.351185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.351213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 
00:38:34.006 [2024-12-09 09:56:09.351544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.351572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.351916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.351947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.352302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.352331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.352690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.352720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.353064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.353093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.353356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.353384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.353702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.353731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.354084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.354113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.354457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.354487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.354837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.354868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 
00:38:34.006 [2024-12-09 09:56:09.355212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.355241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.355574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.355603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.355952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.355982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.356320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.356349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.356715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.356744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.357071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.357100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.357440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.357469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.357851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.357881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.358209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.358238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.358583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.358612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 
00:38:34.006 [2024-12-09 09:56:09.358963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.358993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.359325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.359365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.359725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.359754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.360121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.360149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.360486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.360515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.360869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.360898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.361259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.361288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.361603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.361633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.363204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.363254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.363590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.363622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 
00:38:34.006 [2024-12-09 09:56:09.363993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.364024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.364368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.364396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.364634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.364675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.006 qpair failed and we were unable to recover it. 00:38:34.006 [2024-12-09 09:56:09.365038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.006 [2024-12-09 09:56:09.365068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.365404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.365434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.365773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.365806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.366180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.366209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.366443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.366472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.366729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.366760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.367185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.367215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 
00:38:34.007 [2024-12-09 09:56:09.367562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.367592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.367848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.367877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.368230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.368259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.368584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.368612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.368942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.368972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.369325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.369354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.369716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.369747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.370098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.370128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.370445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.370474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.370834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.370865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 
00:38:34.007 [2024-12-09 09:56:09.371232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.371262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.371610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.371648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.371989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.372018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.372376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.372405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.372749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.372778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.373139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.373167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.373506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.373536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.373888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.373919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.374164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.374193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.374523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.374552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 
00:38:34.007 [2024-12-09 09:56:09.374886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.374916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.375258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.375287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.375658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.375695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.376125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.376154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.376473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.376501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.376855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.376889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.377130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.377158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.377329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.377358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.377725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.377755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.007 [2024-12-09 09:56:09.378128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.378158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 
00:38:34.007 [2024-12-09 09:56:09.378499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.007 [2024-12-09 09:56:09.378529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.007 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.378906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.378937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.379148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.379176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.379525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.379555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.379926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.379957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.380318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.380347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.380723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.380754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.381128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.381157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.381512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.381542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.381927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.381958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 
00:38:34.008 [2024-12-09 09:56:09.382203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.382232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.382574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.382603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.382937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.382970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.383325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.383354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.383708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.383740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.384078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.384109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.384485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.384514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.384911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.384942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.385294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.385323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.385675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.385712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 
00:38:34.008 [2024-12-09 09:56:09.386041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.386071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.386442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.386471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.386743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.386774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.387034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.387064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.387429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.387458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.387709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.387740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.388083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.388112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.388350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.388379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.388757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.388788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.389157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.389187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 
00:38:34.008 [2024-12-09 09:56:09.389523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.389552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.389898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.389929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.390272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.390301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.008 [2024-12-09 09:56:09.390670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.008 [2024-12-09 09:56:09.390701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.008 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.391047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.391077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.391432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.391461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.391826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.391857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.392184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.392214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.392544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.392573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.392929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.392960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 
00:38:34.009 [2024-12-09 09:56:09.393304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.393333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.393556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.393585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.393944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.393975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.394326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.394355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.394699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.394731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.395109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.395139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.395768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.395807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.396204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.396239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.396573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.396604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 00:38:34.009 [2024-12-09 09:56:09.396968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.009 [2024-12-09 09:56:09.396999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.009 qpair failed and we were unable to recover it. 
00:38:34.009 [2024-12-09 09:56:09.397331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.009 [2024-12-09 09:56:09.397361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.009 qpair failed and we were unable to recover it.
[... the same connect()/qpair-recovery error triplet for tqpair=0xc74130 (addr=10.0.0.2, port=4420) repeats continuously from 09:56:09.397 through 09:56:09.474; ~210 identical occurrences elided, differing only in timestamps ...]
00:38:34.309 [2024-12-09 09:56:09.474356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.309 [2024-12-09 09:56:09.474386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.309 qpair failed and we were unable to recover it.
00:38:34.309 [2024-12-09 09:56:09.474782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.474813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.475167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.475195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.475558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.475587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.475919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.475950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.476309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.476344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.476716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.476746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.477011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.477040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.477342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.477371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.477694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.477724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.477988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.478017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 
00:38:34.309 [2024-12-09 09:56:09.478336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.478365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.478782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.478813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.479139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.479168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.479533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.479561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.479847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.479876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.480217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.480246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.480594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.480623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.480984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.481013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.481330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.481360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.481697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.481728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 
00:38:34.309 [2024-12-09 09:56:09.482079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.482108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.482451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.482480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.482814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.482846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.483191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.483221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.483561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.483590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.483980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.484010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.484345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.484374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.484742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.484772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.485104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.485133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.485497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.485526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 
00:38:34.309 [2024-12-09 09:56:09.485868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.485899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.309 [2024-12-09 09:56:09.486216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.309 [2024-12-09 09:56:09.486245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.309 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.486593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.486623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.486995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.487025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.487368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.487398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.487755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.487786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.488138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.488168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.488513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.488542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.488895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.488925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.489348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.489377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 
00:38:34.310 [2024-12-09 09:56:09.489713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.489743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.490111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.490139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.490488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.490516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.490873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.490902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.491195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.491224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.491548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.491584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.491941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.491972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.492315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.492343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.492688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.492718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.493061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.493090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 
00:38:34.310 [2024-12-09 09:56:09.493444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.493474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.493897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.493928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.494275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.494304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.494676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.494708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.495087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.495116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.495473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.495502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.495863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.495893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.496260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.496289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.496617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.496657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.496997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.497027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 
00:38:34.310 [2024-12-09 09:56:09.497272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.497300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.497658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.497688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.498029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.498058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.498401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.498429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.310 qpair failed and we were unable to recover it. 00:38:34.310 [2024-12-09 09:56:09.498693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.310 [2024-12-09 09:56:09.498724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.499069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.499098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.499426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.499453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.499734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.499764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.500085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.500115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.500470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.500499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 
00:38:34.311 [2024-12-09 09:56:09.500883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.500913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.501248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.501277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.501624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.501668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.501920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.501948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.502268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.502297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.502621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.502660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.502986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.503015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.503360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.503388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.503737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.503768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.504126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.504154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 
00:38:34.311 [2024-12-09 09:56:09.504573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.504602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.504944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.504974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.505334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.505362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.505690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.505720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.506065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.506094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.506455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.506484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.506798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.506835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.507236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.507266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.507599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.507628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.508009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.508039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 
00:38:34.311 [2024-12-09 09:56:09.508384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.508412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.508748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.508778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.509155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.509185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.509533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.509562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.509907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.509938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.510161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.510190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.510538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.510567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.510929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.510959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.511298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.511327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.511664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.511694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 
00:38:34.311 [2024-12-09 09:56:09.512049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.512077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.512414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.311 [2024-12-09 09:56:09.512443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.311 qpair failed and we were unable to recover it. 00:38:34.311 [2024-12-09 09:56:09.512800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.512831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.513183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.513211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.513537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.513565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.513924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.513954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.514295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.514324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.514673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.514705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.515050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.515078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.515443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.515471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 
00:38:34.312 [2024-12-09 09:56:09.515802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.515832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.516087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.516116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.516409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.516438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.516775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.516811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.517140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.517169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.517505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.517534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.517886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.517917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.518247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.518276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.518627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.518665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.519022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.519050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 
00:38:34.312 [2024-12-09 09:56:09.519366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.519395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.519737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.519766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.520125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.520154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.520503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.520532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.520862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.520893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.521228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.521257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.521600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.521629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.521987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.522018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.522366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.522395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 00:38:34.312 [2024-12-09 09:56:09.522752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.312 [2024-12-09 09:56:09.522783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.312 qpair failed and we were unable to recover it. 
00:38:34.312 [2024-12-09 09:56:09.523138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.312 [2024-12-09 09:56:09.523167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.312 qpair failed and we were unable to recover it.
00:38:34.312 [... the three lines above repeat back-to-back from 09:56:09.523 through 09:56:09.601; every attempt fails identically with errno = 111 against tqpair=0xc74130, addr=10.0.0.2, port=4420 ...]
00:38:34.318 [2024-12-09 09:56:09.601292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.318 [2024-12-09 09:56:09.601321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.318 qpair failed and we were unable to recover it.
00:38:34.318 [2024-12-09 09:56:09.601602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.601631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.601981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.602011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.602355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.602384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.602712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.602742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.603088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.603117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.603454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.603483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.603827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.603858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.604218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.604247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.604654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.604684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.604919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.604947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 
00:38:34.318 [2024-12-09 09:56:09.605269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.605298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.605649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.605679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.606054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.606083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.606427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.606456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.606814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.606844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.607197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.607224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.607570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.607598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.607982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.608012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.608366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.608400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.608738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.608770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 
00:38:34.318 [2024-12-09 09:56:09.609138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.609167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.609503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.609532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.609882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.609912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.610255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.610284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.610654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.610684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.611025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.318 [2024-12-09 09:56:09.611056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.318 qpair failed and we were unable to recover it. 00:38:34.318 [2024-12-09 09:56:09.611392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.611421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.611778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.611810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.612148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.612179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.612530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.612560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 
00:38:34.319 [2024-12-09 09:56:09.612907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.612939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.613280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.613311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.613704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.613736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.614081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.614112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.614460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.614490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.614842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.614873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.615123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.615152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.615509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.615540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.615863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.615894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.616237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.616267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 
00:38:34.319 [2024-12-09 09:56:09.616492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.616522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.616788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.616820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.617175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.617205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.617553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.617583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.617942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.617974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.618333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.618369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.618711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.618743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.619076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.619105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.619371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.619401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.619743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.619774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 
00:38:34.319 [2024-12-09 09:56:09.620123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.620154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.620497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.620528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.620890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.620922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.621274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.621304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.621653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.621685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.622034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.622065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.622408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.622439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.622783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.622813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.623163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.623193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.319 [2024-12-09 09:56:09.623530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.623562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 
00:38:34.319 [2024-12-09 09:56:09.623911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.319 [2024-12-09 09:56:09.623943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.319 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.624300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.624330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.624660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.624691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.625027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.625058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.625409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.625439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.625799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.625830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.626177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.626207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.626541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.626573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.626903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.626934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.627301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.627333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 
00:38:34.320 [2024-12-09 09:56:09.627676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.627707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.627938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.627968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.628314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.628345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.628703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.628735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.629092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.629122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.629465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.629495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.629857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.629889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.630214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.630244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.630590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.630621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.630775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.630806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 
00:38:34.320 [2024-12-09 09:56:09.631178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.631211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.631539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.631568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.631937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.631969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.632307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.632337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.632666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.632699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.633024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.633054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.633397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.633433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.633775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.633806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.634139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.634170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.634537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.634568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 
00:38:34.320 [2024-12-09 09:56:09.634997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.635029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.635356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.635386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.635736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.635767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.636118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.636149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.636483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.636515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.636868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.636899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.637243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.637273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.637628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.637671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.638009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.320 [2024-12-09 09:56:09.638040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.320 qpair failed and we were unable to recover it. 00:38:34.320 [2024-12-09 09:56:09.638381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.638412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 
00:38:34.321 [2024-12-09 09:56:09.638744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.638775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.639139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.639170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.639392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.639422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.639779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.639811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.640159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.640190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.640545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.640576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.640937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.640969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.641311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.641343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.641696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.641728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.642105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.642135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 
00:38:34.321 [2024-12-09 09:56:09.642492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.642523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.642846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.642879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.643247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.643279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.643607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.643652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.644001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.644032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.644377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.644408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.644756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.644787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.645155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.645185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.645525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.645557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.645839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.645870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 
00:38:34.321 [2024-12-09 09:56:09.646206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.646236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.646585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.646615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.646956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.646987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.647217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.647248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.647574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.647606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.647968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.648000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.648342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.648372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.648715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.648747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.649125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.649156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 00:38:34.321 [2024-12-09 09:56:09.649507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.649538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it. 
00:38:34.321 [2024-12-09 09:56:09.649886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.321 [2024-12-09 09:56:09.649917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.321 qpair failed and we were unable to recover it.
[... the same three-message pattern (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock sock connection error, unrecoverable qpair) repeats for roughly 210 consecutive attempts against tqpair=0xc74130 (addr=10.0.0.2, port=4420) between 09:56:09.649886 and 09:56:09.727589, every attempt failing with errno = 111; duplicate entries elided ...]
00:38:34.327 [2024-12-09 09:56:09.727556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.727589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it.
00:38:34.327 [2024-12-09 09:56:09.727973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.728004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.728331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.728361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.728703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.728736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.729069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.729098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.729439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.729469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.729632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.729672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.730031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.730060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.730403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.730433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.730786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.730817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.731150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.731180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 
00:38:34.327 [2024-12-09 09:56:09.731528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.731559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.731892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.731923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.732279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.732309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.732663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.732695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.733040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.733071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.733412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.733442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.733786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.733816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.734175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.734204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.734548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.734584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.734918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.734949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 
00:38:34.327 [2024-12-09 09:56:09.735289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.735319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.735689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.735720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.736124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.327 [2024-12-09 09:56:09.736155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.327 qpair failed and we were unable to recover it. 00:38:34.327 [2024-12-09 09:56:09.736481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.736511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.736886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.736917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.737238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.737269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.737629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.737670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.738072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.738102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.738438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.738468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.738808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.738840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 
00:38:34.328 [2024-12-09 09:56:09.739194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.739223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.739550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.739580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.739933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.739965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.740322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.740352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.740741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.740772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.741109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.741139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.741488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.741518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.741859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.741890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.742238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.742269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.742504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.742534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 
00:38:34.328 [2024-12-09 09:56:09.742943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.742974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.743315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.743345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.743680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.743712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.744096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.744126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.328 [2024-12-09 09:56:09.744453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.328 [2024-12-09 09:56:09.744483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.328 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.744814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.744846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.745190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.745563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.745594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.745964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.745995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.746349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.746379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 
00:38:34.603 [2024-12-09 09:56:09.746707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.746737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.747097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.747128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.747470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.747500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.747855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.747886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.603 [2024-12-09 09:56:09.748277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.603 [2024-12-09 09:56:09.748307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.603 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.748633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.748676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.749021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.749052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.749406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.749437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.749776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.749807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.750159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.750195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 
00:38:34.604 [2024-12-09 09:56:09.750528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.750558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.750892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.750924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.751283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.751313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.751675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.751707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.752095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.752125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.752480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.752510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.752855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.752886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.753228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.753258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.753602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.753632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.753980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.754012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 
00:38:34.604 [2024-12-09 09:56:09.754346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.754377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.754660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.754692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.755035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.755064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.755410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.755440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.755681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.755712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.756125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.756155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.756498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.756529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.756867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.756898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.757247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.757277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.757620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.757661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 
00:38:34.604 [2024-12-09 09:56:09.758006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.758038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.758389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.758420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.758768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.758800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.759189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.759218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.759444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.759474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.759821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.759853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.760196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.760232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.760579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.760609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.760976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.761007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.761370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.761399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 
00:38:34.604 [2024-12-09 09:56:09.761753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.761784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.762136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.762166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.604 qpair failed and we were unable to recover it. 00:38:34.604 [2024-12-09 09:56:09.762514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.604 [2024-12-09 09:56:09.762544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.762879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.762911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.763259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.763289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.763631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.763671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.764014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.764045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.764393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.764423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.764659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.764690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.765023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.765052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 
00:38:34.605 [2024-12-09 09:56:09.765407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.765437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.765784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.765816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.766165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.766195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.766542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.766572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.766897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.766928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.767204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.767234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.767561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.767591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.767949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.767980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.768324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.768355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.768711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.768742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 
00:38:34.605 [2024-12-09 09:56:09.769081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.769111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.769471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.769813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.769845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.770196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.770227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.770567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.770598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.770941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.770972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.771323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.771353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.771599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.771629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.772016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.772047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.772283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.772315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 
00:38:34.605 [2024-12-09 09:56:09.772657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.772688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.773000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.773030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.773384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.773414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.773758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.773790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.774134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.774164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.774524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.774554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.774887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.774919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.775263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.775299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.775636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.775676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.776010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.776041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 
00:38:34.605 [2024-12-09 09:56:09.776386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.605 [2024-12-09 09:56:09.776417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.605 qpair failed and we were unable to recover it. 00:38:34.605 [2024-12-09 09:56:09.776762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.776793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.777160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.777191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.777542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.777572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.777916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.777947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.778286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.778317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.778654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.778686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.779019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.779049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.779391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.779422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.779769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.779801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 
00:38:34.606 [2024-12-09 09:56:09.780030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.780060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.780435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.780465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.780807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.780837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.781088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.781118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.781892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.781937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.782300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.782334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.782697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.782732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.783061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.783091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.783430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.783460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.783818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.783850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 
00:38:34.606 [2024-12-09 09:56:09.784190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.784219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.784557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.784587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.784935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.784965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.785314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.785344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.785696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.785727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.786071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.786101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.786448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.786479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.786856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.786888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.787226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.787257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.787594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.787624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 
00:38:34.606 [2024-12-09 09:56:09.787973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.788003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.788366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.788396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.788729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.788761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.789107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.789137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.789471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.789502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.789852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.789883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.790227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.790257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.790597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.790627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.790986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.791018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.606 qpair failed and we were unable to recover it. 00:38:34.606 [2024-12-09 09:56:09.791331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.606 [2024-12-09 09:56:09.791361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 
00:38:34.607 [2024-12-09 09:56:09.791691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.791723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.792064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.792094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.792437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.792467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.792789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.792819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.793167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.793197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.793541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.793571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.793916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.793947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.794344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.794373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.794787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.794818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.795152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.795181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 
00:38:34.607 [2024-12-09 09:56:09.795542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.795571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.795930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.795961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.796300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.796331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.796671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.796702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.797055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.797085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.797431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.797461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.797804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.797836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.798187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.798217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.798557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.798586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.798943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.798974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 
00:38:34.607 [2024-12-09 09:56:09.799325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.799355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.799675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.799706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.800045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.800075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.800319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.800349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.800695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.800726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.801066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.801101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.801439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.801468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.801790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.801822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.802164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.802193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.802542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.802572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 
00:38:34.607 [2024-12-09 09:56:09.802922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.802954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.803301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.803331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.803671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.803704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.804043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.804073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.804416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.804445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.804810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.804841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.805175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.805204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.805485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.607 [2024-12-09 09:56:09.805514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.607 qpair failed and we were unable to recover it. 00:38:34.607 [2024-12-09 09:56:09.805875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.805906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.806253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.806283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 
00:38:34.608 [2024-12-09 09:56:09.806626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.806665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.807066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.807096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.807438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.807467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.807778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.807809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.808146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.808176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.808519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.808548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.808896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.808926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.809289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.809320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.809655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.809687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.810047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.810076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 
00:38:34.608 [2024-12-09 09:56:09.810418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.810449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.810797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.810828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.811178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.811208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.811554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.811584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.811958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.811989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.812329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.812358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.812695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.812726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.812956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.812986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.813339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.813368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.813720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.813750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 
00:38:34.608 [2024-12-09 09:56:09.814097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.814127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.814475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.814504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.814756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.814786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.815153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.815183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.815522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.815553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.815887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.815918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.816244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.816274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.816600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.816631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.816990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.817021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.608 [2024-12-09 09:56:09.817363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.817393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 
00:38:34.608 [2024-12-09 09:56:09.817697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.608 [2024-12-09 09:56:09.817729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.608 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.818086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.818115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.818457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.818488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.818822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.818855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.819218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.819247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.819594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.819623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.819944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.819975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.820316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.820345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.820691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.820722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.821082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.821112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 
00:38:34.609 [2024-12-09 09:56:09.821479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.821510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.821861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.821893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.822236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.822267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.822482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.822511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.822876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.822906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.823245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.823275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.823607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.823645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.823985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.824015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.824350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.824380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.824724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.824755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 
00:38:34.609 [2024-12-09 09:56:09.825095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.825126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.825479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.825508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.825862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.825894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.826233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.826278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.826623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.826676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.826930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.826961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.827291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.827321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.827542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.827572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.827753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.827784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.828121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.828150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 
00:38:34.609 [2024-12-09 09:56:09.828510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.828540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.828862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.828893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.829160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.829481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.829511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.829870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.829901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.830247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.830277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.830622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.830660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.831023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.609 [2024-12-09 09:56:09.831053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.609 qpair failed and we were unable to recover it. 00:38:34.609 [2024-12-09 09:56:09.831391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.831421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.831765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.831796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 
00:38:34.610 [2024-12-09 09:56:09.832152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.832181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.832515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.832545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.832884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.832916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.833249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.833279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.833479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.833510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.833888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.833920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.834253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.834282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.834632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.834683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.835022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.835052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.835412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.835442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 
00:38:34.610 [2024-12-09 09:56:09.835785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.835816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.836157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.836187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.836538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.836568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.836868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.836898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.837237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.837267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.837609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.837649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.838019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.838049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.838388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.838418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.838684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.838716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 00:38:34.610 [2024-12-09 09:56:09.839081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.610 [2024-12-09 09:56:09.839111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:34.610 qpair failed and we were unable to recover it. 
00:38:34.610 [2024-12-09 09:56:09.839463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.610 [2024-12-09 09:56:09.839493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.610 qpair failed and we were unable to recover it.
00:38:34.611 (last 3 messages repeated 29 more times, through [2024-12-09 09:56:09.850425])
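Errno 111 is ECONNREFUSED: the host's connect() to 10.0.0.2:4420 reaches a machine where nothing is listening on the NVMe/TCP port, so the kernel answers the SYN with a reset. A minimal plain-POSIX sketch that reproduces the same errno (ordinary sockets code, not SPDK's posix_sock_create; the address and port are taken from the log lines above):

```c
/* Sketch: connecting to a TCP port with no listener fails with
 * errno = 111 (ECONNREFUSED), the error flooding this log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no nvmf target listening, this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```

Run against an address with no listener, it prints `connect() failed, errno = 111 (Connection refused)`, matching the records above.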
00:38:34.611 [2024-12-09 09:56:09.850813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.611 [2024-12-09 09:56:09.850843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.611 qpair failed and we were unable to recover it.
00:38:34.611 (last 3 messages repeated 9 more times, through [2024-12-09 09:56:09.854155])
00:38:34.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3065474 Killed "${NVMF_APP[@]}" "$@"
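That `Killed` line is the disconnect test doing its job: the shell prints `Killed` when a child process dies of SIGKILL, and here the old nvmf_tgt (pid 3065474) has been forced down, which is why every host-side qpair connect above is being refused. For illustration only, the C equivalent of a forced kill like the one the script performs (the pid is the one from the log):

```c
/* Sketch of forcing the old target process down, as target_disconnect.sh
 * does; 3065474 is the pid reported in the "Killed" line above. */
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    pid_t old_target = 3065474;      /* illustrative pid from the log */
    if (kill(old_target, SIGKILL) != 0)
        perror("kill");              /* ESRCH once the process is already gone */
    return 0;
}
```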
00:38:34.611 [2024-12-09 09:56:09.854547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.611 [2024-12-09 09:56:09.854577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.611 qpair failed and we were unable to recover it.
00:38:34.611 (last 3 messages repeated 7 more times, through [2024-12-09 09:56:09.857232])
00:38:34.611 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:38:34.611 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:38:34.611 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:34.611 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:34.611 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.611 [2024-12-09 09:56:09.857609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.611 [2024-12-09 09:56:09.857647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.611 qpair failed and we were unable to recover it.
00:38:34.612 (last 3 messages repeated 9 more times, through [2024-12-09 09:56:09.860960])
00:38:34.612 [2024-12-09 09:56:09.861333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.612 [2024-12-09 09:56:09.861363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.612 qpair failed and we were unable to recover it.
00:38:34.612 (last 3 messages repeated 6 more times, through [2024-12-09 09:56:09.863490])
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3066510
00:38:34.612 [2024-12-09 09:56:09.863758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.612 [2024-12-09 09:56:09.863789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.612 qpair failed and we were unable to recover it.
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3066510
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3066510 ']'
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:34.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:34.612 09:56:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:34.612 [2024-12-09 09:56:09.864126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.612 [2024-12-09 09:56:09.864155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.612 qpair failed and we were unable to recover it.
00:38:34.612 (last 3 messages repeated 5 more times, through [2024-12-09 09:56:09.866068])
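With the replacement target launched inside the cvl_0_0_ns_spdk namespace, waitforlisten polls until the new process (pid 3066510) is up and accepting connections on the RPC socket /var/tmp/spdk.sock, giving up after max_retries=100 attempts. A rough sketch of such a readiness probe over a UNIX-domain socket (illustrative only; the real waitforlisten is a shell helper in the SPDK test scripts, autotest_common.sh in this trace, and wait_for_listen below is a made-up name):

```c
/* Readiness probe in the spirit of waitforlisten: retry connecting to the
 * RPC UNIX socket until the freshly started target accepts, or give up
 * after max_retries attempts. Plain POSIX sketch, not the SPDK helper. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);      /* brief pause between attempts */
    }
    return -1;                   /* listener never appeared */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("target is up on /var/tmp/spdk.sock\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}
```

With a 100 ms pause and 100 retries, the probe waits at most about ten seconds before declaring the target dead.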
00:38:34.612 [2024-12-09 09:56:09.866504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.612 [2024-12-09 09:56:09.866533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420
00:38:34.612 qpair failed and we were unable to recover it.
00:38:34.615 (last 3 messages repeated 119 more times, through [2024-12-09 09:56:09.910068])
00:38:34.615 [... six more failed connect attempts on tqpair=0xc74130, 09:56:09.910459 through 09:56:09.912342, all with errno = 111 ...]
00:38:34.616 [2024-12-09 09:56:09.912409] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization...
00:38:34.616 [2024-12-09 09:56:09.912451] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:34.616 [... three more failed connect attempts on tqpair=0xc74130 follow, 09:56:09.912703 through 09:56:09.913434 ...]
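The "DPDK EAL parameters" line above shows the argv that SPDK's env layer forwards to DPDK's rte_eal_init() during startup. A minimal sketch of that handoff, reusing the exact flags printed in the log (illustrative only, built against DPDK; this is not SPDK's actual bootstrap code):

/* Sketch: feed the EAL flags from the log line into rte_eal_init().
 * Assumes a DPDK development environment; error handling is minimal. */
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf", "-c", "0xF0", "--no-telemetry",
        "--log-level=lib.eal:6", "--log-level=lib.cryptodev:5",
        "--log-level=lib.power:5", "--log-level=user1:6",
        "--base-virtaddr=0x200000000000", "--match-allocations",
        "--file-prefix=spdk0", "--proc-type=auto",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    /* rte_eal_init() parses the flags above and brings up hugepages,
     * cores 4-7 (mask 0xF0), and the spdk0 shared-memory prefix. */
    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}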
00:38:34.616 [... the failure record repeats for ten more attempts on tqpair=0xc74130, 09:56:09.913810 through 09:56:09.916651 ...]
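On Linux, errno 111 is ECONNREFUSED: the target at 10.0.0.2:4420 is actively refusing TCP connections, which is what posix_sock_create keeps hitting above. A self-contained sketch that reproduces the same errno when nothing is listening on that address and port (illustrative only, not SPDK code):

/* Sketch: a plain connect() to 10.0.0.2:4420 (address and port taken from
 * the log). With no listener, this prints errno = 111 (Connection refused). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP listener port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}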
00:38:34.616 Read completed with error (sct=0, sc=8)
00:38:34.616 starting I/O failed
00:38:34.616 [... the "completed with error (sct=0, sc=8) / starting I/O failed" pair repeats for 32 outstanding commands in total: 18 reads and 14 writes ...]
00:38:34.616 [2024-12-09 09:56:09.916895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:34.616 [2024-12-09 09:56:09.917321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.616 [2024-12-09 09:56:09.917336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.616 qpair failed and we were unable to recover it.
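The CQ transport error -6 is -ENXIO ("No such device or address", as the log itself spells out). The (sct=0, sc=8) status on the aborted commands decodes, per the NVMe base specification's Generic Command Status table, to "Command Aborted due to SQ Deletion" — consistent with the submission queue being torn down when the connection died. A minimal decoding sketch (not SPDK code; it assumes the log prints the raw status fields in decimal):

/* Sketch: decode the sct/sc pair printed in the records above. */
#include <stdio.h>

static const char *nvme_generic_sc_str(int sc)
{
    switch (sc) {
    case 0x0: return "Successful Completion";
    case 0x4: return "Data Transfer Error";
    case 0x7: return "Command Abort Requested";
    case 0x8: return "Command Aborted due to SQ Deletion"; /* NVMe base spec */
    default:  return "other status code";
    }
}

int main(void)
{
    int sct = 0, sc = 8;   /* values from the log records above */
    if (sct == 0)          /* SCT 0 = Generic Command Status */
        printf("sct=%d, sc=%d -> %s\n", sct, sc, nvme_generic_sc_str(sc));
    return 0;
}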
00:38:34.616 [... the identical failure record now repeats for tqpair=0x7f2cf8000b90: some 150 reconnect attempts between 09:56:09.917545 and 09:56:09.962942, every one ending in "connect() failed, errno = 111" and "qpair failed and we were unable to recover it." ...]
00:38:34.620 [2024-12-09 09:56:09.963271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.620 [2024-12-09 09:56:09.963279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.620 qpair failed and we were unable to recover it. 00:38:34.620 [2024-12-09 09:56:09.963600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.620 [2024-12-09 09:56:09.963607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.620 qpair failed and we were unable to recover it. 00:38:34.620 [2024-12-09 09:56:09.963934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.620 [2024-12-09 09:56:09.963941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.620 qpair failed and we were unable to recover it. 00:38:34.620 [2024-12-09 09:56:09.964259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.620 [2024-12-09 09:56:09.964266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.620 qpair failed and we were unable to recover it. 00:38:34.620 [2024-12-09 09:56:09.964585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.620 [2024-12-09 09:56:09.964593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.620 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.964933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.964940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.965253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.965260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.965429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.965436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.965648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.965656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.965836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.965842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 
00:38:34.621 [2024-12-09 09:56:09.966151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.966158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.966491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.966499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.966656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.966664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.966946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.966953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.967255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.967262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.967580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.967587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.967919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.967927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.968268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.968275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.968685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.968692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.969002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.969008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 
00:38:34.621 [2024-12-09 09:56:09.969323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.969330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.969665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.969673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.969871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.969877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.970159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.970166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.970486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.970493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.970811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.970818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.971173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.971180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.971517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.971523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.971834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.971841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.972186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.972194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 
00:38:34.621 [2024-12-09 09:56:09.972521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.972527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.972853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.972860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.973188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.973194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.973541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.973548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.973901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.973908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.974213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.974220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.974288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.974294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.974618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.974624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.974948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.974955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.975228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.975236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 
00:38:34.621 [2024-12-09 09:56:09.975601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.975608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.975921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.975928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.621 [2024-12-09 09:56:09.976288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.621 [2024-12-09 09:56:09.976296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.621 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.976496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.976504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.976808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.976816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.977138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.977145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.977469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.977476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.977664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.977672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.977965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.977973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.978303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.978311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 
00:38:34.622 [2024-12-09 09:56:09.978625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.978632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.978790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.978797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.979087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.979094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.979250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.979257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.979536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.979543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.979856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.979864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.980185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.980192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.980520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.980527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.980836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.980843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.981183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.981190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 
00:38:34.622 [2024-12-09 09:56:09.981529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.981537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.981857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.981866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.982183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.982190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.982554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.982561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.982875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.982882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.983221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.983228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.983560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.983566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.983902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.983910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.984103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.984110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.984471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.984478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 
00:38:34.622 [2024-12-09 09:56:09.984793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.984801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.985119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.985127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.985463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.985470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.985788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.985795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.986113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.986120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.986430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.986436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.622 [2024-12-09 09:56:09.986790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.622 [2024-12-09 09:56:09.986798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.622 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.986994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.987002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.987300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.987307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.987601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.987608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 
00:38:34.623 [2024-12-09 09:56:09.987908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.987915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.988227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.988234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.988541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.988549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.988847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.988854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.989178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.989187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.989382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.989390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.989766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.989773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.990094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.990101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.990516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.990523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.990833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.990840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 
00:38:34.623 [2024-12-09 09:56:09.991175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.991182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.991489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.991497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.991813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.991820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.992139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.992146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.992479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.992486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.992833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.992840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.993177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.993185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.993243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.993250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.993561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.993568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.993871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.993878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 
00:38:34.623 [2024-12-09 09:56:09.994213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.994220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.994555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.994561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.994872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.994879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.995211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.995218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.995517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.995524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.995845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.995852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.996180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.996188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.996523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.996530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.996909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.996917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 00:38:34.623 [2024-12-09 09:56:09.997079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.623 [2024-12-09 09:56:09.997087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.623 qpair failed and we were unable to recover it. 
00:38:34.623 [2024-12-09 09:56:09.997278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc81ff0 is same with the state(6) to be set
00:38:34.623 [2024-12-09 09:56:09.997985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.623 [2024-12-09 09:56:09.998089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf4000b90 with addr=10.0.0.2, port=4420
00:38:34.623 qpair failed and we were unable to recover it.
00:38:34.623 [... the same sequence repeats twice more for tqpair=0x7f2cf4000b90 at 09:56:09.998375 and 09:56:09.998873 ...]
00:38:34.624 [... connect() failed (errno = 111) / sock connection error / qpair failure entries resume for tqpair=0x7f2cf8000b90 (addr=10.0.0.2, port=4420) from 09:56:09.999365 through 09:56:10.000808 ...]
00:38:34.624 [... identical connect() failed (errno = 111) / sock connection error / qpair failure entries for tqpair=0x7f2cf8000b90 continue from 09:56:10.001150 ...]
00:38:34.624 [2024-12-09 09:56:10.002451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:34.625 [... identical connect() failed (errno = 111) / sock connection error / qpair failure entries for tqpair=0x7f2cf8000b90 (addr=10.0.0.2, port=4420) continue through 09:56:10.015294; condensed ...]
00:38:34.625 [2024-12-09 09:56:10.015580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.015589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.015828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.015836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.016208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.016216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.016524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.016533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.016856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.016865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.017239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.017248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.017565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.017574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.017648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.017656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.017983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.017992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.018402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.018411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 
00:38:34.625 [2024-12-09 09:56:10.018708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.018717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.018895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.018902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.019189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.019197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.019509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.019517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.019826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.019834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.020010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.020018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.020295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.020304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.020521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.020529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.020839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.020846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 00:38:34.625 [2024-12-09 09:56:10.021155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.625 [2024-12-09 09:56:10.021164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.625 qpair failed and we were unable to recover it. 
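errno = 111 is ECONNREFUSED on Linux: host 10.0.0.2 is reachable, but nothing is accepting on port 4420 yet (the NVMe/TCP listener has not come up), so nvme_tcp_qpair_connect_sock keeps retrying the same qpair. A minimal sketch of the same check from the initiator side, using only bash and coreutils (a hypothetical helper, not part of the test itself):

# Probe the NVMe/TCP listener; bash's /dev/tcp redirection performs a plain
# TCP connect(), so a refused connection here matches errno = 111 in the log.
if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "listener up on 10.0.0.2:4420"
else
  echo "connect() refused or timed out (cf. errno = 111 above)"
fi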
00:38:34.626 [2024-12-09 09:56:10.022482] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:34.626 [2024-12-09 09:56:10.022513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:34.626 [2024-12-09 09:56:10.022521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:34.626 [2024-12-09 09:56:10.022528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:34.626 [2024-12-09 09:56:10.022534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
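The app_setup_trace notices are the target side starting up with every tracepoint group enabled (mask 0xFFFF). Following the log's own hint, a runtime snapshot could be captured on the same host roughly like this (a sketch; the command, app name, instance id, and path are taken verbatim from the notices above):

# Dump a snapshot of the nvmf app's tracepoints while it is running ...
spdk_trace -s nvmf -i 0
# ... or keep the raw shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0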
00:38:34.626 [2024-12-09 09:56:10.024064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:38:34.626 [2024-12-09 09:56:10.024263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:38:34.626 [2024-12-09 09:56:10.024399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:38:34.626 [2024-12-09 09:56:10.024400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
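The reactor_run notices show the target's event framework bringing up one reactor (poller loop) per core in its core mask, here cores 4-7; the connect() retries above are failing because they race this startup. A core mask covering exactly those cores would look like the following launch line (hypothetical -- the actual invocation is not shown in this excerpt):

# 0xF0 sets bits 4-7, i.e. one reactor each on cores 4, 5, 6 and 7.
nvmf_tgt -m 0xF0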
00:38:34.626 [... connect() retries against 10.0.0.2:4420 continue past target startup, all failing with errno = 111 ...]
00:38:34.907 [2024-12-09 09:56:10.062113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.907 [2024-12-09 09:56:10.062121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.907 qpair failed and we were unable to recover it.
00:38:34.907 [2024-12-09 09:56:10.062419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.062426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.062711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.062719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.062877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.062884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.063162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.063170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.063454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.063462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.063749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.063757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.063953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.063960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.064299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.064306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.064618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.064625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.064948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.064956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 
00:38:34.907 [2024-12-09 09:56:10.065160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.065167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.065399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.907 [2024-12-09 09:56:10.065406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.907 qpair failed and we were unable to recover it. 00:38:34.907 [2024-12-09 09:56:10.065758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.065766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.065950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.065957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.066189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.066196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.066554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.066563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.066832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.066841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.067110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.067118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.067435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.067442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.067730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.067738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 
00:38:34.908 [2024-12-09 09:56:10.067903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.067910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.068192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.068199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.068518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.068525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.068852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.068860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.069162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.069169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.069375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.069383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.069429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.069436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.069750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.069758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.069974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.069982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.070155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.070163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 
00:38:34.908 [2024-12-09 09:56:10.070330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.070338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.070649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.070657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.070936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.070943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.071317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.071324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.071618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.071625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.071843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.071851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.072028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.072035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.072209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.072217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.072487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.072494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.072790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.072797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 
00:38:34.908 [2024-12-09 09:56:10.073103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.073110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.073409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.073416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.073732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.073739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.073780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.073786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.074079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.074088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.074128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.074135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.074445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.074452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.074748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.074755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.075083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.075090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 00:38:34.908 [2024-12-09 09:56:10.075410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.908 [2024-12-09 09:56:10.075418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.908 qpair failed and we were unable to recover it. 
00:38:34.909 [2024-12-09 09:56:10.075465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.075472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.075709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.075716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.075931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.075939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.076279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.076287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.076596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.076603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.076758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.076767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.076944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.076951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.077279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.077286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.077561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.077569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.077753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.077761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 
00:38:34.909 [2024-12-09 09:56:10.078106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.078113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.078423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.078430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.078725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.078733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.078995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.079003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.079309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.079317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.079592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.079599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.079916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.079924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.080097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.080104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.080414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.080421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.080716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.080723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 
00:38:34.909 [2024-12-09 09:56:10.081047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.081054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.081352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.081360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.081721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.081729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.081778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.081784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.082086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.082093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.082416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.082423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.082656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.082663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.082955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.082962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.083231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.083238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.083560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.083567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 
00:38:34.909 [2024-12-09 09:56:10.083982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.083990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.084272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.084280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.084564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.084572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.084616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.084626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.084945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.084952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.085229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.085236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.085543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.085557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.909 [2024-12-09 09:56:10.085871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.909 [2024-12-09 09:56:10.085878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.909 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.086061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.086068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.086419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.086426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 
00:38:34.910 [2024-12-09 09:56:10.086721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.086728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.087063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.087070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.087339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.087348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.087673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.087681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.087975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.087983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.088335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.088342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.088657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.088665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.088828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.088834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.089097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.089104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.089403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.089410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 
00:38:34.910 [2024-12-09 09:56:10.089730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.089738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.089913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.089920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.090076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.090084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.090372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.090378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.090648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.090656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.090853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.090860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.091171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.091179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.091478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.091486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.091756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.091764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.092069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.092076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 
00:38:34.910 [2024-12-09 09:56:10.092270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.092278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.092575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.092582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.092745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.092752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.092904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.092911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.093187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.093194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.093462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.093469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.093801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.093810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.094153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.094160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.094320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.094329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 00:38:34.910 [2024-12-09 09:56:10.094651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.910 [2024-12-09 09:56:10.094659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.910 qpair failed and we were unable to recover it. 
00:38:34.911 [2024-12-09 09:56:10.094938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.094944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.095136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.095145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.095460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.095467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.095792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.095800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.096102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.096109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.096434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.096442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.096742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.096750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.097064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.097071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.097252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.097260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 00:38:34.911 [2024-12-09 09:56:10.097425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.911 [2024-12-09 09:56:10.097432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.911 qpair failed and we were unable to recover it. 
00:38:34.911 [2024-12-09 09:56:10.097763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.911 [2024-12-09 09:56:10.097771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.911 qpair failed and we were unable to recover it.
00:38:34.916 [... the same three-line failure repeats back-to-back for roughly 200 further connect attempts between 09:56:10.098 and 09:56:10.158, always with errno = 111 against tqpair=0x7f2cf8000b90, addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it."; duplicate records elided ...]
00:38:34.916 [2024-12-09 09:56:10.158455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.916 [2024-12-09 09:56:10.158462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.916 qpair failed and we were unable to recover it. 00:38:34.916 [2024-12-09 09:56:10.158672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.916 [2024-12-09 09:56:10.158679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.916 qpair failed and we were unable to recover it. 00:38:34.916 [2024-12-09 09:56:10.159026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.916 [2024-12-09 09:56:10.159033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.916 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.159313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.159320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.159510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.159517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.159891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.159898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.160188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.160195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.160522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.160529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.160718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.160726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.160959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.160966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 
00:38:34.917 [2024-12-09 09:56:10.161270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.161277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.161599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.161607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.161941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.161949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.162282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.162289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.162591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.162599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.162919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.163196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.163204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.163505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.163513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.163798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.163806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.164104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.164111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 
00:38:34.917 [2024-12-09 09:56:10.164385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.164392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.164688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.164697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.165014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.165021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.165334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.165342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.165631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.165644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.165815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.165822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.166097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.166103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.166372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.166379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.166682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.166689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.167001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.167008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 
00:38:34.917 [2024-12-09 09:56:10.167183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.167190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.167476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.167483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.167798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.167807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.168167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.168174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.168446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.168454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.168653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.168660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.168815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.168822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.169146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.169153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.169317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.169325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.917 [2024-12-09 09:56:10.169498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.169505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 
00:38:34.917 [2024-12-09 09:56:10.169832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.917 [2024-12-09 09:56:10.169839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.917 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.170109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.170124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.170295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.170302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.170618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.170626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.170816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.170823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.171140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.171147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.171321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.171328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.171669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.171676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.171999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.172005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.172204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.172218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 
00:38:34.918 [2024-12-09 09:56:10.172601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.172608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.172841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.173121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.173128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.173411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.173418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.173595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.173602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.173952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.173959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.174266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.174272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.174586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.174592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.174920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.174927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.175248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.175256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 
00:38:34.918 [2024-12-09 09:56:10.175429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.175435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.175750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.175757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.176049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.176056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.176209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.176215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.176501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.176508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.176718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.176726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.177055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.177062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.177360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.177368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.177636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.177647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.177813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.177820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 
00:38:34.918 [2024-12-09 09:56:10.178152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.178159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.178455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.178462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.178780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.178789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.179086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.179093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.179302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.179310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.179550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.179558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.179751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.179759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.180042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.180050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.918 [2024-12-09 09:56:10.180235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.918 [2024-12-09 09:56:10.180243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.918 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.180448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.180455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 
00:38:34.919 [2024-12-09 09:56:10.180649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.180656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.180990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.180997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.181155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.181162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.181505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.181512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.181847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.181854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.182021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.182328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.182373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.182648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.182693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 
00:38:34.919 [2024-12-09 09:56:10.182989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.182995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.183035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.183042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.183337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.183344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.183647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.183655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.183940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.183946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.184262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.184268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.184602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.184609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.184943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.184950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.185101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.185108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.185401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.185408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 
00:38:34.919 [2024-12-09 09:56:10.185580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.185587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.185865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.185872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.186162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.186168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.186475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.186481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.186794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.186802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.186978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.186985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.187172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.187180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.187369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.187376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.187687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.187695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.187963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.187970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 
00:38:34.919 [2024-12-09 09:56:10.188263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.919 [2024-12-09 09:56:10.188270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.919 qpair failed and we were unable to recover it. 00:38:34.919 [2024-12-09 09:56:10.188488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.188495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.188689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.188698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.189011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.189018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.189318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.189326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.189610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.189617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.189797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.189804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.190199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.190207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.190369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.190377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.190682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.190689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 
00:38:34.920 [2024-12-09 09:56:10.190998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.191005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.191183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.191190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.191470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.191476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.191701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.191709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.192062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.192069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.192365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.192372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.192744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.192751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.192938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.192947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.193225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.193233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 00:38:34.920 [2024-12-09 09:56:10.193561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.920 [2024-12-09 09:56:10.193567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.920 qpair failed and we were unable to recover it. 
00:38:34.920 [2024-12-09 09:56:10.193851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.920 [2024-12-09 09:56:10.193858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.920 qpair failed and we were unable to recover it.
[the same three-message sequence, a posix_sock_create connect() failure with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.", repeats verbatim for every reconnect attempt logged between 09:56:10.194215 and 09:56:10.251225]
00:38:34.926 [2024-12-09 09:56:10.251529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.926 [2024-12-09 09:56:10.251537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.926 qpair failed and we were unable to recover it.
00:38:34.926 [2024-12-09 09:56:10.251848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.251855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.252025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.252032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.252358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.252365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.252699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.252706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.252977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.252984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.253267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.253275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.253575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.253582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.253906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.253914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.254212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.254219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.254540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.254546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 
00:38:34.926 [2024-12-09 09:56:10.254723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.254730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.254989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.254995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.255264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.255272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.255585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.255591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.255751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.255758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.256061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.256068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.256337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.256344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.256709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.256716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.257013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.257020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.257172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.257178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 
00:38:34.926 [2024-12-09 09:56:10.257447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.257455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.257773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.257780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.258059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.258066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.258383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.258389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.258704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.258711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.259014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.259021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.259394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.259402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.259770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.259777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.260074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.260080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.260396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.260403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 
00:38:34.926 [2024-12-09 09:56:10.260713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.260720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.261066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.261073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.926 qpair failed and we were unable to recover it. 00:38:34.926 [2024-12-09 09:56:10.261350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.926 [2024-12-09 09:56:10.261357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.261625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.261631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.261927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.261935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.262121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.262129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.262456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.262463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.262766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.262773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.262994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.263001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.263362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.263370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 
00:38:34.927 [2024-12-09 09:56:10.263650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.263657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.263933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.263939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.264246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.264254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.264554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.264562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.264850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.264857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.265170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.265177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.265354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.265360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.265647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.265654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.265964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.265971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.266147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.266153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 
00:38:34.927 [2024-12-09 09:56:10.266433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.266439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.266722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.266730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.267039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.267045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.267347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.267354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.267667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.267674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.268059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.268066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.268366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.268373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.268708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.268715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.269033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.269040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.269223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.269230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 
00:38:34.927 [2024-12-09 09:56:10.269549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.269555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.269870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.269878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.270180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.270188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.270341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.270349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.270686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.270693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.270981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.270988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.271320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.271327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.271506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.271513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.271686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.271693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 00:38:34.927 [2024-12-09 09:56:10.271996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.272003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.927 qpair failed and we were unable to recover it. 
00:38:34.927 [2024-12-09 09:56:10.272269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.927 [2024-12-09 09:56:10.272275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.272584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.272591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.272925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.272932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.273332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.273338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.273657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.273664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.273976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.273983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.274162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.274168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.274346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.274558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.274565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.274871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.274881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 
00:38:34.928 [2024-12-09 09:56:10.275148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.275156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.275503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.275510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.275682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.275691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.276020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.276027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.276311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.276317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.276356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.276363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.276716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.276723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.277018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.277024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.277310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.277318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.277618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.277624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 
00:38:34.928 [2024-12-09 09:56:10.277938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.277945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.278259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.278266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.278331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.278338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.278702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.278709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.279030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.279037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.279209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.279225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.279522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.279529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.279874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.279881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.280175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.280182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.280349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.280358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 
00:38:34.928 [2024-12-09 09:56:10.280744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.280751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.280938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.280945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.281282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.281290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.281438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.281445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.928 [2024-12-09 09:56:10.281594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.928 [2024-12-09 09:56:10.281601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.928 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.281881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.282154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.282162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.282475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.282483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.282625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.282632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.282958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.282966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 
00:38:34.929 [2024-12-09 09:56:10.283285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.283293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.283625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.283633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.283975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.283983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.284293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.284301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.284607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.284614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.284924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.284932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.285245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.285253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.285596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.285603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.285807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.285815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.286087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.286095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 
00:38:34.929 [2024-12-09 09:56:10.286363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.286371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.286689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.286696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.286870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.286878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.287200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.287207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.287488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.287495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.287804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.287811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.288108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.288115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.288290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.288297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.288576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.288583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 00:38:34.929 [2024-12-09 09:56:10.288981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:34.929 [2024-12-09 09:56:10.288988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:34.929 qpair failed and we were unable to recover it. 
00:38:34.929 [2024-12-09 09:56:10.289153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:34.929 [2024-12-09 09:56:10.289161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:34.929 qpair failed and we were unable to recover it.
[... the same three-line error repeats continuously (roughly 200 further connection attempts) from 09:56:10.289 through 09:56:10.344, all against tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:35.214 [2024-12-09 09:56:10.344972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.344980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.345321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.345328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.345641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.345648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.345926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.345932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.346139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.346146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.346462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.346468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.214 qpair failed and we were unable to recover it. 00:38:35.214 [2024-12-09 09:56:10.346760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.214 [2024-12-09 09:56:10.346768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.346948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.346956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.347233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.347241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.347369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.347375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 
00:38:35.215 [2024-12-09 09:56:10.347418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.347425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.347628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.347635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.347824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.347831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.348149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.348156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.348448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.348455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.348770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.348778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.349104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.349111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.349422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.349429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.349588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.349595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.349777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.349784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 
00:38:35.215 [2024-12-09 09:56:10.350073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.350081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.350410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.350417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.350572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.350579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.350878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.350885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.351063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.351071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.351364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.351371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.351675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.351682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.351999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.352005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.352221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.352227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.352560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.352567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 
00:38:35.215 [2024-12-09 09:56:10.352738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.352745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.353123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.353130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.353417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.353424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.353599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.353607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.353767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.353774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.354068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.354075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.354409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.354416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.354753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.354760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.355031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.355038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.355302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.355309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 
00:38:35.215 [2024-12-09 09:56:10.355631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.355644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.355974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.355980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.356297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.356303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.215 [2024-12-09 09:56:10.356599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.215 [2024-12-09 09:56:10.356606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.215 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.356877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.356885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.357066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.357072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.357454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.357463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.357751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.357758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.358073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.358080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.358389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.358396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 
00:38:35.216 [2024-12-09 09:56:10.358701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.358709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.359022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.359029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.359233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.359241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.359524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.359531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.359810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.359817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.360107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.360115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.360459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.360466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.360768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.360775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.361139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.361145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.361456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.361463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 
00:38:35.216 [2024-12-09 09:56:10.361774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.361781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.362087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.362094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.362381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.362388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.362704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.362711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.363004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.363010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.363188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.363195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.363444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.363451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.363797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.363804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.364086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.364093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.364364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.364371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 
00:38:35.216 [2024-12-09 09:56:10.364684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.364692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.365014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.365021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.365221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.365229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.365496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.365503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.365805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.365814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.366144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.366151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.366318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.366325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.366555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.366562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.366758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.366765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.366937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.366944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 
00:38:35.216 [2024-12-09 09:56:10.367245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.367251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.216 [2024-12-09 09:56:10.367566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.216 [2024-12-09 09:56:10.367574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.216 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.367785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.367793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.368129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.368136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.368419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.368425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.368742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.368749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.369064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.369074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.369359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.369367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.369582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.369589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.369904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.369911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 
00:38:35.217 [2024-12-09 09:56:10.370230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.370236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.370544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.370551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.370865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.370872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.371212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.371219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.371397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.371404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.371446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.371453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.371792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.371799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.372106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.372113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.372323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.372330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.372629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.372641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 
00:38:35.217 [2024-12-09 09:56:10.372939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.372946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.373128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.373404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.373410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.373749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.373757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.374039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.374046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.374326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.374333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.374495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.374512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.374812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.374819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.375162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.375169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.375413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.375420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 
00:38:35.217 [2024-12-09 09:56:10.375643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.375650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.375980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.375987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.376268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.376274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.376596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.376603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.376900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.376907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.377217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.377223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.377530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.377536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.377890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.377897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.378065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.378073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 00:38:35.217 [2024-12-09 09:56:10.378232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.217 [2024-12-09 09:56:10.378240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.217 qpair failed and we were unable to recover it. 
00:38:35.218 [2024-12-09 09:56:10.378346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.378354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.378625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.378632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.378951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.378958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.379255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.379262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.379450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.379457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.379804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.379811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.380126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.380135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.380416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.380424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.380607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.380614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 00:38:35.218 [2024-12-09 09:56:10.380928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.380936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it. 
00:38:35.218 [2024-12-09 09:56:10.381242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.218 [2024-12-09 09:56:10.381249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.218 qpair failed and we were unable to recover it.
00:38:35.224 [... the same three-message error sequence repeats for every reconnect attempt from 09:56:10.381242 through 09:56:10.437576 (≈200 attempts): each connect() to 10.0.0.2 port 4420 on tqpair=0x7f2cf8000b90 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports the sock connection error, and the attempt ends with "qpair failed and we were unable to recover it." ...]
00:38:35.224 [2024-12-09 09:56:10.437890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.437897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.438070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.438076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.438384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.438390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.438702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.438709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.439009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.439016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.439274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.439286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.439609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.439616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.439994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.440002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.440184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.440191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.440475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.440482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 
00:38:35.224 [2024-12-09 09:56:10.440830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.440837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.441140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.441147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.441304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.441311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.441582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.441590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.441886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.441893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.442183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.442189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.442266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.442273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.442417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.442424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.442689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.442696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.443014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.443021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 
00:38:35.224 [2024-12-09 09:56:10.443352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.443360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.443646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.443653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.443815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.443822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.443926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.443934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.444212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.444221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.444394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.444402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.444720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.444731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.445025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.445031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.445210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.445217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.445566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.445572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 
00:38:35.224 [2024-12-09 09:56:10.445759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.445767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.445939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.445946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.446250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.446256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.446548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.446555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.446889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.446896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.224 [2024-12-09 09:56:10.447245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.224 [2024-12-09 09:56:10.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.224 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.447624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.447631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.447871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.447878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.448092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.448098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.448423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.448431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 
00:38:35.225 [2024-12-09 09:56:10.448708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.448716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.448989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.448996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.449286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.449293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.449481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.449488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.449785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.449792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.450110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.450117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.450307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.450314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.450708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.450716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.450913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.450920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.451086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.451092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 
00:38:35.225 [2024-12-09 09:56:10.451411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.451418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.451680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.451688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.452021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.452028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.452353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.452359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.452741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.452748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.453040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.453046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.453243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.453261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.453576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.453583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.453892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.453899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.454090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.454097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 
00:38:35.225 [2024-12-09 09:56:10.454466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.454473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.454734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.454741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.455036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.455043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.455358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.455365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.455694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.455702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.455926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.455933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.456251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.456260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.456597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.456605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.456941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.456949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.457258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.457265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 
00:38:35.225 [2024-12-09 09:56:10.457474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.457482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.457750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.457759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.458074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.458081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.225 [2024-12-09 09:56:10.458358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.225 [2024-12-09 09:56:10.458366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.225 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.458680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.458984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.458990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.459288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.459295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.459612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.459619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.459957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.459964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.460298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.460305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 
00:38:35.226 [2024-12-09 09:56:10.460599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.460607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.460762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.460769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.461056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.461064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.461364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.461371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.461527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.461535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.461848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.461856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.462161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.462169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.462474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.462481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.462801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.462808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.463135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.463143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 
00:38:35.226 [2024-12-09 09:56:10.463481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.463487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.463808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.463815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.464009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.464015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.464318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.464325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.464504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.464512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.464745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.464752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.464942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.464950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.465256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.465263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.465529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.465536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.465735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.465743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 
00:38:35.226 [2024-12-09 09:56:10.466070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.466077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.466357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.466364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.466686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.466693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.466892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.466900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.467065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.467082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.467412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.467419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.467747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.467756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.468104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.468112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.468420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.468428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.468730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.468739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 
00:38:35.226 [2024-12-09 09:56:10.468955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.468963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.226 [2024-12-09 09:56:10.469279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.226 [2024-12-09 09:56:10.469286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.226 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.469577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.469585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.469894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.469901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.470180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.470188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.470498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.470505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.470809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.470817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.471153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.471160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.471203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.471209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.471482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.471490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 
00:38:35.227 [2024-12-09 09:56:10.471782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.471789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.472127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.472133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.472501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.472508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.472899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.472906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.473202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.473209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.473404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.473412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.473597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.473604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.473906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.473913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.474098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.474107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 00:38:35.227 [2024-12-09 09:56:10.474387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.474395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it. 
00:38:35.227 [2024-12-09 09:56:10.474581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.227 [2024-12-09 09:56:10.474589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.227 qpair failed and we were unable to recover it.
00:38:35.227 [... the identical triplet -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeats continuously from 09:56:10.474 through 09:56:10.532, every attempt against the same tqpair, address, and port; only the microsecond timestamps differ ...]
00:38:35.233 [2024-12-09 09:56:10.532348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.532355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it.
00:38:35.233 [2024-12-09 09:56:10.532717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.532724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.532941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.532947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.533133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.533140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.533181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.533188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.533477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.533484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.533830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.533838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.534134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.534141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.534324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.534342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.534498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.534505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.534719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.534727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 
00:38:35.233 [2024-12-09 09:56:10.535034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.535041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.535333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.535340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.535501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.535518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.535745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.535755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.536211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.536218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.536380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.536387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.536717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.536725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.536887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.536895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.537191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.537198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.537360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.537367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 
00:38:35.233 [2024-12-09 09:56:10.537642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.537650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.537949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.537957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.538137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.538146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.538470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.538477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.538774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.538781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.538853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.538859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.539192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.539199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.539513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.539520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.233 [2024-12-09 09:56:10.539814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.233 [2024-12-09 09:56:10.539823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.233 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.540131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.540137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 
00:38:35.234 [2024-12-09 09:56:10.540290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.540296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.540606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.540613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.540983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.540990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.541312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.541321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.541640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.541647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.541861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.541868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.542241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.542248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.542560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.542567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.542824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.542832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.543086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.543092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 
00:38:35.234 [2024-12-09 09:56:10.543396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.543404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.543616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.543624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.543922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.543929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.544245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.544252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.544328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.544336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.544518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.544526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.544851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.544858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.545127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.545135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.545440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.545446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.545738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.545745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 
00:38:35.234 [2024-12-09 09:56:10.546078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.546085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.546394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.546401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.546696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.546703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.547003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.547011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.547198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.547208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.547494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.547501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.547892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.547899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.548178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.548185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.548507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.548801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.548808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 
00:38:35.234 [2024-12-09 09:56:10.549122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.549129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.549311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.549317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.549621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.549628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.549950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.549958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.550276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.550284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.550498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.550505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.550688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.234 [2024-12-09 09:56:10.550695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.234 qpair failed and we were unable to recover it. 00:38:35.234 [2024-12-09 09:56:10.551020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.551027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.551240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.551249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.551567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.551574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 
00:38:35.235 [2024-12-09 09:56:10.551923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.551931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.552251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.552258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.552547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.552555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.552897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.552907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.553201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.553209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.553520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.553528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.553915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.553922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.554226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.554234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.554506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.554514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.554556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.554562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 
00:38:35.235 [2024-12-09 09:56:10.554849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.554857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.555076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.555082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.555248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.555255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.555541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.555548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.555855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.555862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.556200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.556206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.556534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.556541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.556843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.556851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.557138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.557145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.557458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.557465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 
00:38:35.235 [2024-12-09 09:56:10.557768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.557775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.558063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.558071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.558366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.558373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.558653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.558661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.559030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.559037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.559340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.559348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.559560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.559723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.559730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.560018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.560025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.560303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.560310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 
00:38:35.235 [2024-12-09 09:56:10.560515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.560523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.560832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.560839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.561152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.561160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.561457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.561464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.561760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.235 [2024-12-09 09:56:10.561768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.235 qpair failed and we were unable to recover it. 00:38:35.235 [2024-12-09 09:56:10.562081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.562088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.562281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.562295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.562506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.562513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.562675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.562683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.563006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.563013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 
00:38:35.236 [2024-12-09 09:56:10.563185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.563194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.563332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.563340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.563523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.563530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.563881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.563891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.564156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.564163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.564475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.564483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.564808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.564816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.565018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.565025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.565187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.565194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.565542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.565550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 
00:38:35.236 [2024-12-09 09:56:10.565867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.565875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.566192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.566198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.566370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.566386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.566679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.566686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.566910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.566917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.567209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.567215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.567441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.567449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.567731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.567739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.568071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.568078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 00:38:35.236 [2024-12-09 09:56:10.568352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.568358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it. 
00:38:35.236 [2024-12-09 09:56:10.568647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.236 [2024-12-09 09:56:10.568654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.236 qpair failed and we were unable to recover it.
00:38:35.236 [... the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.", repeats 49 more times with only the timestamps changing, 09:56:10.568836 through 09:56:10.582124 ...]
00:38:35.238 [... four more identical failures for tqpair=0x7f2cf8000b90, timestamps 09:56:10.582349 through 09:56:10.583023 ...]
00:38:35.238 [2024-12-09 09:56:10.583472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.238 [2024-12-09 09:56:10.583567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf4000b90 with addr=10.0.0.2, port=4420 00:38:35.238 qpair failed and we were unable to recover it.
00:38:35.238 [... two further failures for tqpair=0x7f2cf4000b90 at 09:56:10.584014 and 09:56:10.584432, after which the failures resume on tqpair=0x7f2cf8000b90 at 09:56:10.584634, 09:56:10.584969 and 09:56:10.585144 ...]
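Note on the errors above: the tqpair=0x... value in these messages is the pointer to the host's per-qpair TCP state, so the brief switch from 0x7f2cf8000b90 to 0x7f2cf4000b90 most likely just marks a qpair object being freed and re-allocated between retries; the failure itself never changes. errno = 111 is ECONNREFUSED on Linux, meaning nothing was accepting TCP connections at 10.0.0.2:4420 while the host kept dialing. A minimal sketch (not SPDK code) to confirm the errno mapping on a Linux build host:

    /* Illustrative only: verify that errno 111 is ECONNREFUSED
     * on the Linux/glibc hosts these tests run on. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        /* Prints "yes" and "Connection refused" on Linux. */
        printf("errno 111 == ECONNREFUSED? %s\n",
               111 == ECONNREFUSED ? "yes" : "no");
        printf("strerror(111): %s\n", strerror(111));
        return 0;
    }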
00:38:35.238 [... the identical sequence -- posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats 150 times with only the timestamps changing, 09:56:10.585477 through 09:56:10.626373 ...]
00:38:35.242 [2024-12-09 09:56:10.626666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.626676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.627088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.627095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.627380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.627387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.627671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.627971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.627980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.628275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.628282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.628468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.628476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.628746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.628754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.629018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.629025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.629309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.629317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 
00:38:35.242 [2024-12-09 09:56:10.629464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.629471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.629762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.629769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.630066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.630072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.630361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.630369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.630531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.630540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.630832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.630839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.631136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.631142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.631458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.631465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.631646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.631654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.631956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.631964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 
00:38:35.242 [2024-12-09 09:56:10.632231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.632411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.632418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.632718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.242 [2024-12-09 09:56:10.632725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.242 qpair failed and we were unable to recover it. 00:38:35.242 [2024-12-09 09:56:10.633061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.633068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.633372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.633379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.633712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.633721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.633876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.633882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.634203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.634210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.634388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.634397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.634562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.634569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 
00:38:35.243 [2024-12-09 09:56:10.634914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.634922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.635266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.635274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.635466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.635474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.635849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.635856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.636173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.636180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.636476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.636483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.636798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.636806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.637087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.637094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.637394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.637402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.637702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.637710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 
00:38:35.243 [2024-12-09 09:56:10.638042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.638051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.638225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.638232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.638583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.638591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.638851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.638859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.639152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.639160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.639496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.639505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.639809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.639817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.640180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.640187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.640479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.640487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.640803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.640810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 
00:38:35.243 [2024-12-09 09:56:10.641142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.641150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.641471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.641479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.641783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.641791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.641949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.641957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.642439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.642532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.643170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.643261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc74130 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.643610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.643619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.643908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.643916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.644272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.644482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.644495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 
00:38:35.243 [2024-12-09 09:56:10.644839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.243 [2024-12-09 09:56:10.644846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.243 qpair failed and we were unable to recover it. 00:38:35.243 [2024-12-09 09:56:10.645150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.645157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.645458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.645465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.645630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.645642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.645869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.645876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.646183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.646191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.646330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.646337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.646601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.646608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.646924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.646932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.647261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.647270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 
00:38:35.244 [2024-12-09 09:56:10.647608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.647616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.647928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.648170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.648179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.244 [2024-12-09 09:56:10.648521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.244 [2024-12-09 09:56:10.648529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.244 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.648707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.648715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.649030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.649038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.649343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.649350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.649612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.649620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.649919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.649927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.650212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.650219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 
00:38:35.518 [2024-12-09 09:56:10.650534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.650544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.650718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.650727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.651111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.651118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.651397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.651405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.651721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.651729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.651933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.651942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.652130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.652139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.652294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.652301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.652640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.652648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.652933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.652940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 
00:38:35.518 [2024-12-09 09:56:10.653283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.653291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.653476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.653484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.653665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.653673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.654007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.654014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.654286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.654294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.518 qpair failed and we were unable to recover it. 00:38:35.518 [2024-12-09 09:56:10.654461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.518 [2024-12-09 09:56:10.654468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.654774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.654782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.655125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.655133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.655418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.655425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.655729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.655737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 
00:38:35.519 [2024-12-09 09:56:10.656087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.656094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.656350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.656357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.656664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.656671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.657010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.657017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.657262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.657271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.657446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.657453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.657717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.657726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.658029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.658036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.658201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.658208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.658483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.658490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 
00:38:35.519 [2024-12-09 09:56:10.658810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.658817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.659007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.659015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.659215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.659223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.659565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.659573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.659881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.659888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.660187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.660195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.660505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.660512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.660811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.660818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.661145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.661153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.661450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.661458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 
00:38:35.519 [2024-12-09 09:56:10.661671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.661686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.661869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.661876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.662192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.662199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.662513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.662520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.662932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.662940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.663161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.663168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.663485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.663493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.663790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.663798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.663974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.663981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.664360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.664367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 
00:38:35.519 [2024-12-09 09:56:10.664683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.664692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.664990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.664997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.665267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.519 [2024-12-09 09:56:10.665274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.519 qpair failed and we were unable to recover it. 00:38:35.519 [2024-12-09 09:56:10.665609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.665617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.665811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.665818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.666001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.666009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.666352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.666359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.666723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.666731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.667086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.667093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.667407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.667415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 
00:38:35.520 [2024-12-09 09:56:10.667715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.667723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.668042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.668049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.668385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.668392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.668564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.668581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.668861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.668869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.669153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.669160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.669457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.669464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.669509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.669516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.669642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.669649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.669823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.669830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 
00:38:35.520 [2024-12-09 09:56:10.670155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.670163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.670445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.670453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.670642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.670651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.670940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.670947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.671273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.671281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.671614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.671622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.671934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.671942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.672212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.672220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.672561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.672570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.672807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.672814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 
00:38:35.520 [2024-12-09 09:56:10.673135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.673144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.673299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.673307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.673618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.673626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.673809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.673818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.674107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.674114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.674419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.674427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.674610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.674618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.674986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.674993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.675309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.675316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.675594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.675601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 
00:38:35.520 [2024-12-09 09:56:10.675780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.520 [2024-12-09 09:56:10.675788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.520 qpair failed and we were unable to recover it. 00:38:35.520 [2024-12-09 09:56:10.676096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.676103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.676420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.676428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.676610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.676617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.676986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.676994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.677286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.677293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.677589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.677596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.677898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.677906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.678217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.678225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.678386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.678394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 
00:38:35.521 [2024-12-09 09:56:10.678743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.678752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.679052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.679060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.679372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.679381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.679684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.679692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.680002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.680009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.680327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.680335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.680516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.680530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.680743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.680751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.681152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.681159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.681466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.681474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 
00:38:35.521 [2024-12-09 09:56:10.681783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.681791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.682105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.682112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.682428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.682437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.682646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.682655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.682928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.682936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.683099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.683107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.683378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.683385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.683580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.683588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.683919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.683926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.684100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.684107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 
00:38:35.521 [2024-12-09 09:56:10.684150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.684158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.684442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.684448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.684626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.684633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.684907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.684914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.685234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.685241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.685432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.685439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.685783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.685790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.685949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.685957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.521 [2024-12-09 09:56:10.686030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.521 [2024-12-09 09:56:10.686037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.521 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.686311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 
00:38:35.522 [2024-12-09 09:56:10.686605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.686612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.686920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.686927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.687262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.687269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.687452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.687460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.687628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.687635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.687916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.687923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.688087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.688094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.688323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.688330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.688610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.688618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.688925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.688932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 
00:38:35.522 [2024-12-09 09:56:10.689250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.689258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.689596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.689604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.689807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.689814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.690111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.690118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.690443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.690449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.690624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.690630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.690957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.690964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.691033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.691040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.691191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.691199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.691485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.691492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 
00:38:35.522 [2024-12-09 09:56:10.691765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.691774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.692154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.692160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.692475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.692482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.692802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.692809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.693003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.693010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.693222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.693230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.693567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.693574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.693893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.693900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.693940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.693946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.694304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.694311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 
00:38:35.522 [2024-12-09 09:56:10.694596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.694605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.694926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.694933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.522 [2024-12-09 09:56:10.695110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.522 [2024-12-09 09:56:10.695118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.522 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.695225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.695232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.695411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.695418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.695553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.695560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.695863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.695870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.696164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.696171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.696474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.696482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.696760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.696767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 
00:38:35.523 [2024-12-09 09:56:10.697041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.697049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.697259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.697267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.697560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.697567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.697848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.697855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.698063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.698070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.698334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.698341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.698529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.698537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.698853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.698860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.699162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.699169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.699519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.699525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 
00:38:35.523 [2024-12-09 09:56:10.699829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.699837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.700037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.700044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.700256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.700264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.700414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.700422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.700742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.700750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.701067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.701074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.701352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.701358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.701646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.701654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.701832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.701840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.702173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.702180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 
00:38:35.523 [2024-12-09 09:56:10.702354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.702363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.702646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.702654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.702975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.702981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.703148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.703155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.703478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.703485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.703794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.703801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.704113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.704120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.704461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.704468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.704636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.523 [2024-12-09 09:56:10.704646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.523 qpair failed and we were unable to recover it. 00:38:35.523 [2024-12-09 09:56:10.704963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.704970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 
00:38:35.524 [2024-12-09 09:56:10.705283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.705292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.705516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.705523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.705873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.705880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.706149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.706156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.706374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.706381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.706507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.706514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.706575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.706581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.706782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.706789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.707063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.707070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.707385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.707392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 
00:38:35.524 [2024-12-09 09:56:10.707726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.707734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.708019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.708026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.708344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.708352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.708546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.708553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.708866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.708873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.709166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.709173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.709486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.709492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.709796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.709803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.709987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.709994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 00:38:35.524 [2024-12-09 09:56:10.710315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.524 [2024-12-09 09:56:10.710322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.524 qpair failed and we were unable to recover it. 
00:38:35.524 [2024-12-09 09:56:10.710495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.524 [2024-12-09 09:56:10.710503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420
00:38:35.524 qpair failed and we were unable to recover it.
[... 29 further identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets elided, 09:56:10.710810 through 09:56:10.718843, all for tqpair=0x7f2cf8000b90, addr=10.0.0.2, port=4420 ...]
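The errno in the repeated failures above, 111, is ECONNREFUSED on Linux: connect() reached the host but nothing was accepting on port 4420, which is expected here while the target-disconnect test has the NVMe-oF target torn down. A minimal standalone sketch of the same failure mode (plain POSIX C, not SPDK source; the address and port are copied from the log, and against a different host the call may time out rather than be refused):

/* Reproduce "connect() failed, errno = 111" against a port with no listener. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe-oF TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With the target down, errno is 111 (ECONNREFUSED) on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}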
00:38:35.525 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
[... 1 connect() failed (errno = 111) / qpair failed triplet elided, 09:56:10.719157, tqpair=0x7f2cf8000b90 ...]
00:38:35.525 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
[... 2 triplets elided, 09:56:10.719501 and 09:56:10.719729 ...]
00:38:35.525 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[... 1 triplet elided, 09:56:10.720050 ...]
00:38:35.525 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:35.525 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 4 triplets elided, 09:56:10.720384 through 09:56:10.721160, all for tqpair=0x7f2cf8000b90, addr=10.0.0.2, port=4420 ...]
[... 130 identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets elided, 09:56:10.721454 through 09:56:10.757378, all for tqpair=0x7f2cf8000b90, addr=10.0.0.2, port=4420 ...]
[... 6 connect() failed (errno = 111) / qpair failed triplets elided, 09:56:10.757710 through 09:56:10.759357, tqpair=0x7f2cf8000b90 ...]
00:38:35.529 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... 2 triplets elided, 09:56:10.759692 and 09:56:10.759896 ...]
00:38:35.529 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... 1 triplet elided, 09:56:10.760165 ...]
00:38:35.529 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
[... 1 connect() failed (errno = 111) / qpair failed triplet elided, 09:56:10.760501, tqpair=0x7f2cf8000b90 ...]
00:38:35.529 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 2 triplets elided, 09:56:10.760808 and 09:56:10.761003, tqpair=0x7f2cf8000b90 ...]
00:38:35.529 [2024-12-09 09:56:10.761513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.529 [2024-12-09 09:56:10.761603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf4000b90 with addr=10.0.0.2, port=4420
00:38:35.529 qpair failed and we were unable to recover it.
00:38:35.529 [2024-12-09 09:56:10.762031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:38:35.529 [2024-12-09 09:56:10.762068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf4000b90 with addr=10.0.0.2, port=4420
00:38:35.529 qpair failed and we were unable to recover it.
[... 4 triplets elided, 09:56:10.762346 through 09:56:10.763264, back on tqpair=0x7f2cf8000b90 ...]
00:38:35.530 [the connect()/qpair-failure triplet repeats 120 times, 09:56:10.763566-09:56:10.798285, all errno = 111 on tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420]
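A note for triage: errno 111 on Linux is ECONNREFUSED, i.e. the host's connect() reaches the target box but nothing is accepting on 10.0.0.2:4420 yet, which is consistent with a target-disconnect test where the host keeps retrying while the target side is still being (re)configured. One quick way to confirm the mapping on a Linux test node (a sketch; the header path assumes the usual asm-generic layout):

    # Look up errno 111 in the kernel's generic errno table.
    grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
    # expected: #define ECONNREFUSED 111 /* Connection refused */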
00:38:35.532 Malloc0
00:38:35.532 [3 triplets, 09:56:10.798578-09:56:10.799108]
00:38:35.533 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.533 [2 more triplets, 09:56:10.799311-09:56:10.799654]
00:38:35.533 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:38:35.533 [1 more triplet, 09:56:10.800024-09:56:10.800032]
00:38:35.533 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.533 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.533 [2 more triplets, 09:56:10.800305-09:56:10.800642]
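The nvmf_create_transport call traced above is the second setup step: the TCP transport must exist inside the target before any subsystem can listen on it. A standalone sketch of the same call, again assuming rpc_cmd wraps scripts/rpc.py; the exact meaning of -o here is an assumption on my part (in recent rpc.py it is the TCP-only switch that disables the C2H-success optimization):

    # Instantiate the TCP transport in the running nvmf target.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o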
00:38:35.533 [the connect()/qpair-failure triplet repeats 19 times, 09:56:10.800874-09:56:10.805778, errno = 111, tqpair=0x7f2cf8000b90, addr=10.0.0.2, port=4420]
00:38:35.533 [2024-12-09 09:56:10.805783] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:35.533 [9 more triplets, 09:56:10.806077-09:56:10.808287]
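The "*** TCP Transport Init ***" notice is the target-side acknowledgement that the transport created above is now live. When debugging a run like this interactively, one plausible sanity check is to list the active transports and their parameters (a standard SPDK RPC; default socket path assumed):

    # List active transports and their negotiated parameters as JSON.
    ./scripts/rpc.py nvmf_get_transports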
00:38:35.533 [2024-12-09 09:56:10.808517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.533 [2024-12-09 09:56:10.808524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.533 qpair failed and we were unable to recover it. 00:38:35.533 [2024-12-09 09:56:10.808828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.533 [2024-12-09 09:56:10.808835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.533 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.809008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.809015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.809177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.809184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.809344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.809351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.809571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.809578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.809912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.809920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.810136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.810143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.810442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.810450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.810692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.810699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 
00:38:35.534 [2024-12-09 09:56:10.811000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.811007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.811196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.811203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.811513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.811520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.811782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.811791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.812073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.812080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.812469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.812476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.812799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.812807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.813000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.813007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.813293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.813300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 00:38:35.534 [2024-12-09 09:56:10.813588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:38:35.534 [2024-12-09 09:56:10.813596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2cf8000b90 with addr=10.0.0.2, port=4420 00:38:35.534 qpair failed and we were unable to recover it. 
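The retry loop above is the host-side initiator: errno 111 is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 yet while the target is still being configured. A minimal, hypothetical sketch of the kind of host-side attach that produces this loop, using SPDK's stock scripts/rpc.py — the checkout path, RPC socket path, and bdev name are illustrative assumptions, not taken from this log; only -t/-a/-s/-n match the records:

# Hypothetical host-side attach (assumed paths and names)
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/host.sock \
    bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
# Until the target's listener is up, each attempt fails exactly like the
# records above: connect() -> ECONNREFUSED (errno 111).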
00:38:35.534 [... connect() failed (errno = 111) retries continue, interleaved with the shell trace below ...]
00:38:35.534 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.534 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:38:35.534 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.534 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
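The trace line above creates the target subsystem. A minimal standalone sketch of the same step with SPDK's stock scripts/rpc.py, assuming a target app already running on the default RPC socket; the transport call corresponds to the "*** TCP Transport Init ***" notice earlier, and $SPDK_DIR is the assumed checkout path from the sketch above:

# Sketch: create the TCP transport and the subsystem the test traces above.
"$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp
"$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001   # -a: allow any host; -s: serial number reported to initiators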
00:38:35.534 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat for every retry from 09:56:10.816 through 09:56:10.826, identical apart from timestamps ...]
00:38:35.535 [... connect() failed (errno = 111) retries continue, interleaved with the shell trace below ...]
00:38:35.535 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.535 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:38:35.535 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.535 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
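nvmf_subsystem_add_ns exposes an existing bdev as a namespace, so the Malloc0 bdev must have been created beforehand. A sketch of both steps, assuming the stock scripts/rpc.py; the malloc size and block size below are illustrative assumptions, as the log does not show the bdev creation:

# Sketch: back the subsystem with a RAM bdev and expose it as a namespace.
"$SPDK_DIR/scripts/rpc.py" bdev_malloc_create -b Malloc0 64 512   # 64 MiB, 512 B blocks (assumed)
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0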
00:38:35.536 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." records repeat for every retry from 09:56:10.828 through 09:56:10.837, identical apart from timestamps ...]
00:38:35.536 [... connect() failed (errno = 111) retries continue, interleaved with the shell trace below ...]
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.537 [... connect() failed (errno = 111) retries continue through 09:56:10.845983, until the listener below comes up ...]
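The add_listener call traced above, together with the discovery-service variant traced just below, is what finally opens 10.0.0.2:4420 and ends the ECONNREFUSED loop. A standalone sketch of the same two RPCs, assuming the stock scripts/rpc.py:

# Sketch: start listening for the data subsystem and for the discovery service.
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener discovery \
    -t tcp -a 10.0.0.2 -s 4420
# After the first call the target logs "*** NVMe/TCP Target Listening ***",
# as seen in the next record.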
00:38:35.537 [2024-12-09 09:56:10.846034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:35.537 [2024-12-09 09:56:10.856726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.537 [2024-12-09 09:56:10.856793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.537 [2024-12-09 09:56:10.856807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.537 [2024-12-09 09:56:10.856813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.537 [2024-12-09 09:56:10.856818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:35.537 [2024-12-09 09:56:10.856833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:35.537 qpair failed and we were unable to recover it.
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:35.537 09:56:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3065679
00:38:35.537 [2024-12-09 09:56:10.866591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:35.537 [2024-12-09 09:56:10.866653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:35.537 [2024-12-09 09:56:10.866664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:35.537 [2024-12-09 09:56:10.866669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:35.537 [2024-12-09 09:56:10.866673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:35.537 [2024-12-09 09:56:10.866684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:35.537 qpair failed and we were unable to recover it.
00:38:35.538 [... the same "Unknown controller ID 0x1" / Connect command failed (rc -5, sct 1, sc 130) / CQ transport error -6 record repeats roughly every 10 ms from 09:56:10.876 through 09:56:11.007, identical apart from timestamps ...]
00:38:35.800 [2024-12-09 09:56:11.016965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.800 [2024-12-09 09:56:11.017064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.800 [2024-12-09 09:56:11.017074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.800 [2024-12-09 09:56:11.017079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.800 [2024-12-09 09:56:11.017083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.800 [2024-12-09 09:56:11.017093] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.800 qpair failed and we were unable to recover it. 00:38:35.800 [2024-12-09 09:56:11.026980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.800 [2024-12-09 09:56:11.027028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.800 [2024-12-09 09:56:11.027038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.800 [2024-12-09 09:56:11.027043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.800 [2024-12-09 09:56:11.027048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.800 [2024-12-09 09:56:11.027058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.800 qpair failed and we were unable to recover it. 00:38:35.800 [2024-12-09 09:56:11.037134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.037191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.037201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.037206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.037210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.037220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.047091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.047181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.047191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.047196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.047200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.047213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.057130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.057185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.057196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.057200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.057205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.057215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.067149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.067193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.067203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.067208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.067212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.067222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.077129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.077194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.077203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.077208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.077213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.077223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.087154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.087206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.087216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.087221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.087225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.087235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.097193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.097241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.097251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.097256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.097260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.097270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.107211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.107265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.107275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.107280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.107284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.107294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.117202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.117248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.117258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.117263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.117267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.117277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.127271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.127324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.127334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.127339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.127343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.127354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.137298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.137346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.137358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.137363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.137367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.137377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.147300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.147345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.147355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.147360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.147364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.147374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.157327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.157371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.157381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.157386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.157390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.157400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.167368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.167419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.167429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.167434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.167439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.167449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.177406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.177456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.177466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.177471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.177478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.177488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 00:38:35.801 [2024-12-09 09:56:11.187395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.801 [2024-12-09 09:56:11.187444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.801 [2024-12-09 09:56:11.187454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.801 [2024-12-09 09:56:11.187459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.801 [2024-12-09 09:56:11.187463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.801 [2024-12-09 09:56:11.187473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.801 qpair failed and we were unable to recover it. 
00:38:35.801 [2024-12-09 09:56:11.197465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.197544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.197554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.197559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.197563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.197573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 00:38:35.802 [2024-12-09 09:56:11.207372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.207471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.207482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.207487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.207491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.207502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 00:38:35.802 [2024-12-09 09:56:11.217567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.217621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.217631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.217636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.217645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.217656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 
00:38:35.802 [2024-12-09 09:56:11.227533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.227579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.227588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.227593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.227598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.227608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 00:38:35.802 [2024-12-09 09:56:11.237579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.237626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.237642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.237648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.237652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.237662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 00:38:35.802 [2024-12-09 09:56:11.247625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:35.802 [2024-12-09 09:56:11.247717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:35.802 [2024-12-09 09:56:11.247728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:35.802 [2024-12-09 09:56:11.247733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:35.802 [2024-12-09 09:56:11.247737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:35.802 [2024-12-09 09:56:11.247748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:35.802 qpair failed and we were unable to recover it. 
00:38:36.063 [2024-12-09 09:56:11.257646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.257697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.257706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.257711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.257716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.257726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.267641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.267688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.267700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.267705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.267710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.267720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.277655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.277700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.277709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.277714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.277718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.277729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 
00:38:36.063 [2024-12-09 09:56:11.287725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.287775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.287785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.287789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.287794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.287804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.297738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.297827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.297836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.297841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.297845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.297856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.307772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.307841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.307850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.307855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.307862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.307873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 
00:38:36.063 [2024-12-09 09:56:11.317824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.317870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.317880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.317885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.317889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.317899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.327702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.063 [2024-12-09 09:56:11.327753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.063 [2024-12-09 09:56:11.327763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.063 [2024-12-09 09:56:11.327767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.063 [2024-12-09 09:56:11.327772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.063 [2024-12-09 09:56:11.327781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.063 qpair failed and we were unable to recover it. 00:38:36.063 [2024-12-09 09:56:11.337872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.337926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.337936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.337941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.337946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.337955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 
00:38:36.064 [2024-12-09 09:56:11.347879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.347925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.347935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.347939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.347944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.347954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.357882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.357932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.357942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.357947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.357951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.357961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.367950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.368006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.368016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.368021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.368025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.368035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 
00:38:36.064 [2024-12-09 09:56:11.377973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.378023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.378033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.378038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.378042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.378052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.387968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.388017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.388027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.388031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.388036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.388046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.398041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.398087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.398099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.398104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.398108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.398118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 
00:38:36.064 [2024-12-09 09:56:11.408054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.408105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.408115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.408120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.408125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.408134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.418056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.418110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.418120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.418124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.418129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.418139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.428107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.428192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.428201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.428206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.428211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.428221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 
00:38:36.064 [2024-12-09 09:56:11.438106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.438151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.438161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.438169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.438173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.438183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.448047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.448099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.448109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.448114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.448118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.448128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.458209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.458301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.458311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.458316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.458320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.458331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 
00:38:36.064 [2024-12-09 09:56:11.468223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.468268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.468277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.468282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.468287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.468297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.478225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.478275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.478284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.478289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.478293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.064 [2024-12-09 09:56:11.478303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.064 qpair failed and we were unable to recover it. 00:38:36.064 [2024-12-09 09:56:11.488280] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.064 [2024-12-09 09:56:11.488374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.064 [2024-12-09 09:56:11.488384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.064 [2024-12-09 09:56:11.488388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.064 [2024-12-09 09:56:11.488392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.065 [2024-12-09 09:56:11.488403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.065 qpair failed and we were unable to recover it. 
00:38:36.065 [2024-12-09 09:56:11.498273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.065 [2024-12-09 09:56:11.498321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.065 [2024-12-09 09:56:11.498330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.065 [2024-12-09 09:56:11.498335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.065 [2024-12-09 09:56:11.498339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.065 [2024-12-09 09:56:11.498349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.065 qpair failed and we were unable to recover it. 00:38:36.065 [2024-12-09 09:56:11.508322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.065 [2024-12-09 09:56:11.508415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.065 [2024-12-09 09:56:11.508425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.065 [2024-12-09 09:56:11.508429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.065 [2024-12-09 09:56:11.508434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.065 [2024-12-09 09:56:11.508444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.065 qpair failed and we were unable to recover it. 00:38:36.325 [2024-12-09 09:56:11.518344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.325 [2024-12-09 09:56:11.518393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.325 [2024-12-09 09:56:11.518403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.325 [2024-12-09 09:56:11.518408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.325 [2024-12-09 09:56:11.518412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.325 [2024-12-09 09:56:11.518422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.325 qpair failed and we were unable to recover it. 
00:38:36.325 [2024-12-09 09:56:11.528389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.528494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.528504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.528509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.528513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.528523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.538297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.538350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.538362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.538367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.538371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.538382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.548425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.548483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.548510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.548515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.548520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.548535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.558436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.558537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.558548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.558552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.558557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.558567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.568498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.568587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.568597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.568605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.568609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.568619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.578519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.578570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.578579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.578584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.578589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.578599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.588543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.588596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.588606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.588610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.588615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.588625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.598580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.598662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.598672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.598677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.598681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.598692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.608589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.608646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.608655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.608660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.608664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.608678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.618642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.618690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.618700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.325 [2024-12-09 09:56:11.618705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.325 [2024-12-09 09:56:11.618709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.325 [2024-12-09 09:56:11.618719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.325 qpair failed and we were unable to recover it.
00:38:36.325 [2024-12-09 09:56:11.628680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.325 [2024-12-09 09:56:11.628729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.325 [2024-12-09 09:56:11.628739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.628743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.628749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.628759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.638691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.638739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.638749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.638754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.638759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.638770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.648612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.648672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.648682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.648687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.648691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.648702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.658770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.658817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.658827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.658832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.658836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.658846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.668744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.668791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.668801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.668806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.668810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.668820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.678781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.678828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.678837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.678842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.678847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.678857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.688801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.688859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.688869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.688873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.688878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.688888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.698872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.698925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.698937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.698942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.698946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.698956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.708874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.708921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.708931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.708936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.708940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.708950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.718875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.718920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.718930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.718934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.718939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.718949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.728842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.728892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.728902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.728907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.728911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.728921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.738968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.739019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.739028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.739033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.739041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.739051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.748863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.748914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.748924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.748928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.748933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.748943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.759023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.759071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.759081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.759085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.759090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.759100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.326 [2024-12-09 09:56:11.769062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.326 [2024-12-09 09:56:11.769111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.326 [2024-12-09 09:56:11.769120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.326 [2024-12-09 09:56:11.769125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.326 [2024-12-09 09:56:11.769129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.326 [2024-12-09 09:56:11.769140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.326 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.779056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.779104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.779114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.779119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.779123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.779133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.789109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.789156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.789166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.789170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.789175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.789185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.799104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.799153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.799163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.799167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.799172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.799182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.809144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.809193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.809203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.809208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.809212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.809222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.819131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.819192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.819202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.819206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.819211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.819221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.829190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.829240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.829253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.829258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.829262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.829272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.839203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.839253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.839263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.839267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.839271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.839281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.849240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.849288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.849298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.849303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.849307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.849317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.859305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.859389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.859399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.859404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.859408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.859418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.869323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.869377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.587 [2024-12-09 09:56:11.869396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.587 [2024-12-09 09:56:11.869402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.587 [2024-12-09 09:56:11.869410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.587 [2024-12-09 09:56:11.869425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.587 qpair failed and we were unable to recover it.
00:38:36.587 [2024-12-09 09:56:11.879330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.587 [2024-12-09 09:56:11.879380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.879399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.879405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.879409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.879423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.889370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.889418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.889429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.889434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.889439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.889449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.899432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.899482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.899492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.899497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.899501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.899512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.909439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.909499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.909509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.909514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.909518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.909528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.919466] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.919510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.919520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.919525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.919529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.919539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.929508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.929556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.929566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.929571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.929575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.929585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.939530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.939583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.939593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.939598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.939602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.939612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.949514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.949565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.949576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.949580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.949585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.949595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.959568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.959619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.959632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.959639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.959644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.959654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.969605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.969655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.969665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.969670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.969674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.969685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.979659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.979714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.979724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.979729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.979733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.979744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.989623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.989672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.989682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.989687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.989691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.989701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:11.999686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:11.999733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:11.999743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:11.999751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:11.999755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:11.999766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:12.009717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:12.009803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:12.009813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:12.009818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:12.009822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:12.009832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:12.019609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:12.019659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:12.019669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:12.019674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:12.019678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:12.019689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.588 [2024-12-09 09:56:12.029737] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.588 [2024-12-09 09:56:12.029783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.588 [2024-12-09 09:56:12.029793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.588 [2024-12-09 09:56:12.029798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.588 [2024-12-09 09:56:12.029802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.588 [2024-12-09 09:56:12.029812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.588 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.039780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.039825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.039835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.039839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.039844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.039854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.049841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.049908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.049918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.049923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.049927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.049937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.059863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.059914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.059924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.059929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.059934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.059944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.069867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.069912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.069922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.069927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.069932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.069942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.079800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.079845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.079855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.079859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.079864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.079874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.089925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:36.849 [2024-12-09 09:56:12.089976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:36.849 [2024-12-09 09:56:12.089986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:36.849 [2024-12-09 09:56:12.089991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:36.849 [2024-12-09 09:56:12.089995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:36.849 [2024-12-09 09:56:12.090005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:36.849 qpair failed and we were unable to recover it.
00:38:36.849 [2024-12-09 09:56:12.099980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.849 [2024-12-09 09:56:12.100034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.849 [2024-12-09 09:56:12.100043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.849 [2024-12-09 09:56:12.100048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.849 [2024-12-09 09:56:12.100053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.849 [2024-12-09 09:56:12.100062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.849 qpair failed and we were unable to recover it. 00:38:36.849 [2024-12-09 09:56:12.110000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.849 [2024-12-09 09:56:12.110049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.849 [2024-12-09 09:56:12.110059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.849 [2024-12-09 09:56:12.110064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.849 [2024-12-09 09:56:12.110068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.849 [2024-12-09 09:56:12.110078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.849 qpair failed and we were unable to recover it. 00:38:36.849 [2024-12-09 09:56:12.119993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.849 [2024-12-09 09:56:12.120043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.849 [2024-12-09 09:56:12.120052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.849 [2024-12-09 09:56:12.120058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.849 [2024-12-09 09:56:12.120062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.849 [2024-12-09 09:56:12.120072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.849 qpair failed and we were unable to recover it. 
00:38:36.849 [2024-12-09 09:56:12.130059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.849 [2024-12-09 09:56:12.130108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.849 [2024-12-09 09:56:12.130118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.849 [2024-12-09 09:56:12.130126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.849 [2024-12-09 09:56:12.130130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.849 [2024-12-09 09:56:12.130140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.849 qpair failed and we were unable to recover it. 00:38:36.849 [2024-12-09 09:56:12.140075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.849 [2024-12-09 09:56:12.140131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.140141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.140146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.140150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.140160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.150104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.150147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.150157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.150162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.150166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.150176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 
00:38:36.850 [2024-12-09 09:56:12.160157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.160231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.160240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.160245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.160250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.160259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.170163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.170216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.170225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.170230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.170235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.170247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.180181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.180246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.180256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.180261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.180265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.180275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 
00:38:36.850 [2024-12-09 09:56:12.190207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.190254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.190263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.190268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.190272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.190282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.200236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.200285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.200294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.200299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.200304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.200313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.210261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.210308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.210318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.210323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.210327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.210337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 
00:38:36.850 [2024-12-09 09:56:12.220323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.220370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.220381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.220385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.220390] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.220399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.230337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.230388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.230398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.230402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.230407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.230417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.240378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.240420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.240429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.240434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.240438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.240448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 
00:38:36.850 [2024-12-09 09:56:12.250400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.250450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.250460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.250465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.250469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.250479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.260423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.260515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.260528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.260533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.260537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.260547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.270421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.270464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.270474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.270479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.270483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.270493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 
00:38:36.850 [2024-12-09 09:56:12.280347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.280410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.280420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.280427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.280432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.280442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:36.850 [2024-12-09 09:56:12.290537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:36.850 [2024-12-09 09:56:12.290587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:36.850 [2024-12-09 09:56:12.290597] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:36.850 [2024-12-09 09:56:12.290602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:36.850 [2024-12-09 09:56:12.290607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:36.850 [2024-12-09 09:56:12.290617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:36.850 qpair failed and we were unable to recover it. 00:38:37.112 [2024-12-09 09:56:12.300535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.112 [2024-12-09 09:56:12.300585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.112 [2024-12-09 09:56:12.300595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.112 [2024-12-09 09:56:12.300600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.112 [2024-12-09 09:56:12.300608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.112 [2024-12-09 09:56:12.300618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.112 qpair failed and we were unable to recover it. 
00:38:37.112 [2024-12-09 09:56:12.310573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.112 [2024-12-09 09:56:12.310658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.112 [2024-12-09 09:56:12.310668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.112 [2024-12-09 09:56:12.310673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.112 [2024-12-09 09:56:12.310677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.310687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.320589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.320646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.320656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.320661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.320665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.320675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.330595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.330645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.330655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.330660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.330665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.330675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.340646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.340698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.340708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.340712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.340717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.340727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.350668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.350717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.350727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.350732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.350736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.350746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.360712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.360756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.360766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.360771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.360775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.360785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.370740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.370798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.370808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.370812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.370817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.370827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.380790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.380844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.380854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.380859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.380863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.380873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.390761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.390821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.390834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.390839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.390843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.390853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.400678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.400728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.400738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.400743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.400747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.400757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.410874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.410954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.410964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.410969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.410973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.410983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.420890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.420943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.420952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.420957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.420962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.420971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.430892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.430945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.430955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.430960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.430967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.430977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.440927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.440978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.440987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.440992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.440996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.441006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.450970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.451018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.451028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.451033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.451037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.451047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.460981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.461033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.461043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.461048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.461052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.461062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.471000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.471046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.471056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.471060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.471065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.471075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.481033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.481116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.481126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.481130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.481135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.481145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 
00:38:37.113 [2024-12-09 09:56:12.491086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.491133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.491143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.491147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.491152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.491161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.113 [2024-12-09 09:56:12.501102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.113 [2024-12-09 09:56:12.501149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.113 [2024-12-09 09:56:12.501159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.113 [2024-12-09 09:56:12.501164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.113 [2024-12-09 09:56:12.501168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.113 [2024-12-09 09:56:12.501178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.113 qpair failed and we were unable to recover it. 00:38:37.114 [2024-12-09 09:56:12.511133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.511177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.511186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.511191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.511195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.511205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 
00:38:37.114 [2024-12-09 09:56:12.521118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.521163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.521176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.521181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.521185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.521195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 00:38:37.114 [2024-12-09 09:56:12.531145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.531215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.531225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.531230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.531234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.531244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 00:38:37.114 [2024-12-09 09:56:12.541226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.541280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.541290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.541295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.541299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.541309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 
00:38:37.114 [2024-12-09 09:56:12.551226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.551272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.551282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.551287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.551291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.551302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 00:38:37.114 [2024-12-09 09:56:12.561215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.114 [2024-12-09 09:56:12.561264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.114 [2024-12-09 09:56:12.561274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.114 [2024-12-09 09:56:12.561282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.114 [2024-12-09 09:56:12.561286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.114 [2024-12-09 09:56:12.561296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.114 qpair failed and we were unable to recover it. 00:38:37.376 [2024-12-09 09:56:12.571275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.376 [2024-12-09 09:56:12.571328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.376 [2024-12-09 09:56:12.571338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.571343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.571348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.571358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 
00:38:37.377 [2024-12-09 09:56:12.581292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.581338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.581348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.581353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.581357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.581367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-12-09 09:56:12.591319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.591376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.591395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.591401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.591406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.591420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-12-09 09:56:12.601357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.601408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.601426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.601432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.601437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.601455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 
00:38:37.377 [2024-12-09 09:56:12.611398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.611449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.611461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.611466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.611471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.611483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-12-09 09:56:12.621423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.621471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.621482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.621487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.621492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.621502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 00:38:37.377 [2024-12-09 09:56:12.631444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.377 [2024-12-09 09:56:12.631497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.377 [2024-12-09 09:56:12.631507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.377 [2024-12-09 09:56:12.631512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.377 [2024-12-09 09:56:12.631517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.377 [2024-12-09 09:56:12.631527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.377 qpair failed and we were unable to recover it. 
[... the same six-record CONNECT failure sequence (traddr 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1, tqpair=0x7f2cf8000b90, qpair id 2) repeats at roughly 10 ms intervals from 09:56:12.641 through 09:56:13.263; the intervening near-identical iterations are omitted here ...]
00:38:37.910 [2024-12-09 09:56:13.273138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.273186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.273195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.273200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.273204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.273214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.283169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.283213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.283222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.283227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.283231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.283242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.293243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.293312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.293321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.293326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.293331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.293343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 
00:38:37.910 [2024-12-09 09:56:13.303279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.303333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.303344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.303349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.303353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.303364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.313262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.313303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.313313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.313317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.313322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.313332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.323266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.323306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.323317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.323322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.323326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.323337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 
00:38:37.910 [2024-12-09 09:56:13.333359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.333409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.333419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.333424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.333428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.333438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.343401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.343454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.343473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.343479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.343484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.343498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 00:38:37.910 [2024-12-09 09:56:13.353319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:37.910 [2024-12-09 09:56:13.353372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:37.910 [2024-12-09 09:56:13.353391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:37.910 [2024-12-09 09:56:13.353397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:37.910 [2024-12-09 09:56:13.353402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:37.910 [2024-12-09 09:56:13.353416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:37.910 qpair failed and we were unable to recover it. 
00:38:38.174 [2024-12-09 09:56:13.363387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.174 [2024-12-09 09:56:13.363431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.174 [2024-12-09 09:56:13.363450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.174 [2024-12-09 09:56:13.363456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.174 [2024-12-09 09:56:13.363461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.174 [2024-12-09 09:56:13.363475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.174 qpair failed and we were unable to recover it. 00:38:38.174 [2024-12-09 09:56:13.373489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.174 [2024-12-09 09:56:13.373540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.174 [2024-12-09 09:56:13.373551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.174 [2024-12-09 09:56:13.373557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.174 [2024-12-09 09:56:13.373561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.174 [2024-12-09 09:56:13.373573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.174 qpair failed and we were unable to recover it. 00:38:38.174 [2024-12-09 09:56:13.383507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.174 [2024-12-09 09:56:13.383560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.174 [2024-12-09 09:56:13.383574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.174 [2024-12-09 09:56:13.383579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.174 [2024-12-09 09:56:13.383584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.174 [2024-12-09 09:56:13.383595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.174 qpair failed and we were unable to recover it. 
00:38:38.174 [2024-12-09 09:56:13.393353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.174 [2024-12-09 09:56:13.393404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.174 [2024-12-09 09:56:13.393414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.174 [2024-12-09 09:56:13.393419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.174 [2024-12-09 09:56:13.393424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.174 [2024-12-09 09:56:13.393434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.174 qpair failed and we were unable to recover it. 00:38:38.174 [2024-12-09 09:56:13.403497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.403593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.403604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.403609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.403613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.403623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.413469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.413522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.413533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.413538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.413542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.413553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 
00:38:38.175 [2024-12-09 09:56:13.423625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.423685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.423695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.423700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.423707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.423717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.433588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.433632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.433646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.433651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.433656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.433666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.443611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.443657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.443667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.443672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.443677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.443687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 
00:38:38.175 [2024-12-09 09:56:13.453689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.453741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.453751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.453756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.453760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.453771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.463723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.463774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.463785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.463790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.463794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.463805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.473692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.473735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.473746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.473750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.473755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.473765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 
00:38:38.175 [2024-12-09 09:56:13.483588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.483628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.483641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.483647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.483651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.483661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.493778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.493826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.493835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.175 [2024-12-09 09:56:13.493840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.175 [2024-12-09 09:56:13.493845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.175 [2024-12-09 09:56:13.493855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.175 qpair failed and we were unable to recover it. 00:38:38.175 [2024-12-09 09:56:13.503827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.175 [2024-12-09 09:56:13.503874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.175 [2024-12-09 09:56:13.503884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.503889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.503893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.503903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 
00:38:38.176 [2024-12-09 09:56:13.513789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.513834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.513846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.513851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.513855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.513866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.523822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.523866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.523876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.523881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.523885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.523895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.533891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.533941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.533951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.533956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.533961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.533971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 
00:38:38.176 [2024-12-09 09:56:13.543943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.543988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.543997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.544002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.544007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.544016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.553919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.553963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.553973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.553978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.553988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.553999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.563936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.563981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.563991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.563996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.564000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.564010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 
00:38:38.176 [2024-12-09 09:56:13.573984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.574035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.574044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.574049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.574053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.574063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.584027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.584081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.584090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.584095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.584099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.584109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 00:38:38.176 [2024-12-09 09:56:13.594021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.176 [2024-12-09 09:56:13.594061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.176 [2024-12-09 09:56:13.594071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.176 [2024-12-09 09:56:13.594075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.176 [2024-12-09 09:56:13.594080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.176 [2024-12-09 09:56:13.594090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.176 qpair failed and we were unable to recover it. 
00:38:38.176 [2024-12-09 09:56:13.604088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.177 [2024-12-09 09:56:13.604133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.177 [2024-12-09 09:56:13.604143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.177 [2024-12-09 09:56:13.604147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.177 [2024-12-09 09:56:13.604152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.177 [2024-12-09 09:56:13.604162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.177 qpair failed and we were unable to recover it. 00:38:38.177 [2024-12-09 09:56:13.614118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.177 [2024-12-09 09:56:13.614173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.177 [2024-12-09 09:56:13.614183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.177 [2024-12-09 09:56:13.614187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.177 [2024-12-09 09:56:13.614192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.177 [2024-12-09 09:56:13.614202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.177 qpair failed and we were unable to recover it. 00:38:38.441 [2024-12-09 09:56:13.624130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.624179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.624189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.624194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.624199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.624209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 
00:38:38.441 [2024-12-09 09:56:13.634124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.634168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.634178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.634183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.634187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.634197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 00:38:38.441 [2024-12-09 09:56:13.644142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.644188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.644200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.644205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.644209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.644219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 00:38:38.441 [2024-12-09 09:56:13.654219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.654267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.654277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.654282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.654286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.654296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 
00:38:38.441 [2024-12-09 09:56:13.664234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.664288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.664298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.664302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.664307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.664317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 00:38:38.441 [2024-12-09 09:56:13.674211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.674258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.674268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.674273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.674277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.674287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 00:38:38.441 [2024-12-09 09:56:13.684260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.441 [2024-12-09 09:56:13.684310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.441 [2024-12-09 09:56:13.684319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.441 [2024-12-09 09:56:13.684327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.441 [2024-12-09 09:56:13.684331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.441 [2024-12-09 09:56:13.684341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.441 qpair failed and we were unable to recover it. 
00:38:38.441 [2024-12-09 09:56:13.694293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.694383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.694402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.694408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.694412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.694427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.704375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.704430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.704449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.704455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.704459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.704473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.714323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.714367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.714386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.714392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.714396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.714410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 
00:38:38.442 [2024-12-09 09:56:13.724315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.724358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.724369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.724374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.724378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.724393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.734306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.734357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.734367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.734372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.734376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.734388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.744451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.744501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.744511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.744516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.744521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.744531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 
00:38:38.442 [2024-12-09 09:56:13.754419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.754463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.754473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.754478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.754482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.754493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.764486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.764527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.764537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.764542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.764546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.764556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.774559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.774657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.774667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.774672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.774676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.774687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 
00:38:38.442 [2024-12-09 09:56:13.784557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.784605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.784615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.784619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.784624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.442 [2024-12-09 09:56:13.784634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.442 qpair failed and we were unable to recover it. 00:38:38.442 [2024-12-09 09:56:13.794556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.442 [2024-12-09 09:56:13.794609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.442 [2024-12-09 09:56:13.794619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.442 [2024-12-09 09:56:13.794624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.442 [2024-12-09 09:56:13.794629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.443 [2024-12-09 09:56:13.794642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.443 qpair failed and we were unable to recover it. 00:38:38.443 [2024-12-09 09:56:13.804594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.443 [2024-12-09 09:56:13.804633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.443 [2024-12-09 09:56:13.804647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.443 [2024-12-09 09:56:13.804652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.443 [2024-12-09 09:56:13.804656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.443 [2024-12-09 09:56:13.804667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.443 qpair failed and we were unable to recover it. 
00:38:38.443 [2024-12-09 09:56:13.814676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.814730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.814741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.814749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.814753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.814764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.824670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.824747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.824757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.824762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.824766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.824776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.834667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.834715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.834725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.834730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.834734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.834744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.844709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.844756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.844766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.844771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.844775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.844785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.854680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.854734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.854744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.854749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.854753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.854766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.864809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.864860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.864870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.864875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.864879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.864889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.874797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.874881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.874891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.874896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.874900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.874910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.443 [2024-12-09 09:56:13.884821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.443 [2024-12-09 09:56:13.884868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.443 [2024-12-09 09:56:13.884877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.443 [2024-12-09 09:56:13.884882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.443 [2024-12-09 09:56:13.884886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.443 [2024-12-09 09:56:13.884896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.443 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.894911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.894962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.894972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.894977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.894981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.894992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.904942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.904994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.905004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.905009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.905013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.905023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.914914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.914965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.914975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.914980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.914984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.914994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.924934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.925010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.925020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.925025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.925029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.925039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.935004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.935086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.935095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.935100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.935104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.935114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.945022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.945103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.945116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.945121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.945125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.945135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.707 qpair failed and we were unable to recover it.
00:38:38.707 [2024-12-09 09:56:13.955007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.707 [2024-12-09 09:56:13.955048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.707 [2024-12-09 09:56:13.955058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.707 [2024-12-09 09:56:13.955063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.707 [2024-12-09 09:56:13.955068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.707 [2024-12-09 09:56:13.955078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:13.965036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:13.965075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:13.965085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:13.965090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:13.965094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:13.965105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:13.975117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:13.975166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:13.975176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:13.975181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:13.975185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:13.975195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:13.985129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:13.985177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:13.985186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:13.985191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:13.985198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:13.985208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:13.995076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:13.995116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:13.995126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:13.995130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:13.995135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:13.995145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.005149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.005188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.005198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.005203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.005208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.005218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.015213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.015261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.015271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.015276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.015280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.015290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.025257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.025302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.025311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.025316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.025321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.025331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.035215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.035265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.035275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.035280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.035284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.035294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.045262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.045309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.045318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.045323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.045328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.045338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.055319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.055365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.055376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.055380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.055385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.708 [2024-12-09 09:56:14.055395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.708 qpair failed and we were unable to recover it.
00:38:38.708 [2024-12-09 09:56:14.065356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.708 [2024-12-09 09:56:14.065406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.708 [2024-12-09 09:56:14.065424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.708 [2024-12-09 09:56:14.065430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.708 [2024-12-09 09:56:14.065435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.065449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.075322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.075366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.075388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.075394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.075399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.075414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.085263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.085308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.085326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.085332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.085337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.085351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.095303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.095405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.095417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.095422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.095426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.095437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.105460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.105508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.105519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.105524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.105528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.105539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.115440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.115486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.115496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.115501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.115509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.115519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.125457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.125500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.125510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.125515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.125519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.125530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.135416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.135467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.135477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.135482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.135486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.135496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.145551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.145597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.145607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.145612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.145616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.145626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.709 [2024-12-09 09:56:14.155548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.709 [2024-12-09 09:56:14.155596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.709 [2024-12-09 09:56:14.155606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.709 [2024-12-09 09:56:14.155611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.709 [2024-12-09 09:56:14.155615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.709 [2024-12-09 09:56:14.155625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.709 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.165586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.165628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.165642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.165647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.165652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.165662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.175660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.175711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.175721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.175726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.175730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.175740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.185680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.185727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.185736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.185741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.185746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.185756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.195673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.195737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.195747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.195751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.195756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.195766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.205673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.205720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.205730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.205735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.205739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.205749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.215788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.215837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.215846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.215851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.215855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.972 [2024-12-09 09:56:14.215865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.972 qpair failed and we were unable to recover it.
00:38:38.972 [2024-12-09 09:56:14.225809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.972 [2024-12-09 09:56:14.225866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.972 [2024-12-09 09:56:14.225876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.972 [2024-12-09 09:56:14.225881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.972 [2024-12-09 09:56:14.225885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.225895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.235788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.235854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.235864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.235869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.235873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.235883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.245773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.245850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.245860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.245867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.245872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.245882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.255892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.255942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.255952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.255957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.255961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.255971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.265795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.265846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.265856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.265861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.265865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.265875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.275870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.275912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.275922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.275927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.275931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.275941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.285971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.286011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.286021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.286026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.286031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.286043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.295977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.296029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.296039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.296044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.296048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.296058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.306025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.306108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.306118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.306123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.306127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.306137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.315978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.316025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.316035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.316040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.316044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.973 [2024-12-09 09:56:14.316054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.973 qpair failed and we were unable to recover it.
00:38:38.973 [2024-12-09 09:56:14.325995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.973 [2024-12-09 09:56:14.326035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.973 [2024-12-09 09:56:14.326045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.973 [2024-12-09 09:56:14.326050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.973 [2024-12-09 09:56:14.326054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.326064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.336123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.974 [2024-12-09 09:56:14.336173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.974 [2024-12-09 09:56:14.336183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.974 [2024-12-09 09:56:14.336188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.974 [2024-12-09 09:56:14.336192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.336202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.346154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.974 [2024-12-09 09:56:14.346205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.974 [2024-12-09 09:56:14.346214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.974 [2024-12-09 09:56:14.346219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.974 [2024-12-09 09:56:14.346224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.346234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.356122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.974 [2024-12-09 09:56:14.356160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.974 [2024-12-09 09:56:14.356170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.974 [2024-12-09 09:56:14.356175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.974 [2024-12-09 09:56:14.356179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.356189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.366153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.974 [2024-12-09 09:56:14.366198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.974 [2024-12-09 09:56:14.366208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.974 [2024-12-09 09:56:14.366212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.974 [2024-12-09 09:56:14.366217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.366227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.376220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:38.974 [2024-12-09 09:56:14.376270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:38.974 [2024-12-09 09:56:14.376280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:38.974 [2024-12-09 09:56:14.376287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:38.974 [2024-12-09 09:56:14.376292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:38.974 [2024-12-09 09:56:14.376302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:38.974 qpair failed and we were unable to recover it.
00:38:38.974 [2024-12-09 09:56:14.386243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.974 [2024-12-09 09:56:14.386339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.974 [2024-12-09 09:56:14.386349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.974 [2024-12-09 09:56:14.386353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.974 [2024-12-09 09:56:14.386358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.974 [2024-12-09 09:56:14.386368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.974 qpair failed and we were unable to recover it. 00:38:38.974 [2024-12-09 09:56:14.396224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.974 [2024-12-09 09:56:14.396280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.974 [2024-12-09 09:56:14.396290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.974 [2024-12-09 09:56:14.396295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.974 [2024-12-09 09:56:14.396299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.974 [2024-12-09 09:56:14.396309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.974 qpair failed and we were unable to recover it. 00:38:38.974 [2024-12-09 09:56:14.406210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.974 [2024-12-09 09:56:14.406248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.974 [2024-12-09 09:56:14.406258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.974 [2024-12-09 09:56:14.406263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.974 [2024-12-09 09:56:14.406267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.974 [2024-12-09 09:56:14.406277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.974 qpair failed and we were unable to recover it. 
00:38:38.974 [2024-12-09 09:56:14.416312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:38.974 [2024-12-09 09:56:14.416390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:38.974 [2024-12-09 09:56:14.416400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:38.974 [2024-12-09 09:56:14.416405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:38.974 [2024-12-09 09:56:14.416409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:38.974 [2024-12-09 09:56:14.416422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:38.974 qpair failed and we were unable to recover it. 00:38:39.253 [2024-12-09 09:56:14.426349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.253 [2024-12-09 09:56:14.426401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.253 [2024-12-09 09:56:14.426420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.253 [2024-12-09 09:56:14.426426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.253 [2024-12-09 09:56:14.426431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.253 [2024-12-09 09:56:14.426445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.253 qpair failed and we were unable to recover it. 00:38:39.253 [2024-12-09 09:56:14.436303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.253 [2024-12-09 09:56:14.436352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.253 [2024-12-09 09:56:14.436370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.253 [2024-12-09 09:56:14.436376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.253 [2024-12-09 09:56:14.436381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.253 [2024-12-09 09:56:14.436396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.253 qpair failed and we were unable to recover it. 
00:38:39.253 [2024-12-09 09:56:14.446348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.253 [2024-12-09 09:56:14.446394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.253 [2024-12-09 09:56:14.446405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.253 [2024-12-09 09:56:14.446411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.253 [2024-12-09 09:56:14.446415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.253 [2024-12-09 09:56:14.446426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.253 qpair failed and we were unable to recover it. 00:38:39.253 [2024-12-09 09:56:14.456441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.456494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.456513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.456519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.456524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.456538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.466371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.466421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.466434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.466439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.466444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.466455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 
00:38:39.254 [2024-12-09 09:56:14.476429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.476470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.476482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.476488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.476493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.476504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.486468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.486558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.486568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.486573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.486579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.486590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.496511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.496559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.496569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.496574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.496578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.496588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 
00:38:39.254 [2024-12-09 09:56:14.506597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.506688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.506702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.506707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.506711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.506722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.516556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.516601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.516611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.516616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.516620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.516630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.526559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.526634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.526648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.526653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.526657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.526667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 
00:38:39.254 [2024-12-09 09:56:14.536621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.536676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.536686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.536691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.536695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.536706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.546588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.546685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.546695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.546700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.546707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.546717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.556672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.556718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.556728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.556733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.556738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.556748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 
00:38:39.254 [2024-12-09 09:56:14.566678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.566722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.566732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.566737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.566741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.566751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.576678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.254 [2024-12-09 09:56:14.576745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.254 [2024-12-09 09:56:14.576754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.254 [2024-12-09 09:56:14.576759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.254 [2024-12-09 09:56:14.576764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.254 [2024-12-09 09:56:14.576774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.254 qpair failed and we were unable to recover it. 00:38:39.254 [2024-12-09 09:56:14.586787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.586836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.586846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.586851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.586855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.586865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 
00:38:39.255 [2024-12-09 09:56:14.596646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.596688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.596698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.596703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.596707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.596717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.606759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.606849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.606859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.606864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.606868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.606878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.616867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.616917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.616927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.616932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.616936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.616947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 
00:38:39.255 [2024-12-09 09:56:14.626840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.626926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.626936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.626941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.626945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.626955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.636916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.637029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.637044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.637049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.637054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.637064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.646886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.646925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.646935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.646940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.646944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.646954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 
00:38:39.255 [2024-12-09 09:56:14.656968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.657017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.657027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.657032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.657037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.657047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.667013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.667065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.667075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.667080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.667084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.667094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.676858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.676897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.676907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.676911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.676918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.676929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 
00:38:39.255 [2024-12-09 09:56:14.687010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.687049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.687059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.687063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.687068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.687078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.255 [2024-12-09 09:56:14.697091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.255 [2024-12-09 09:56:14.697176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.255 [2024-12-09 09:56:14.697186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.255 [2024-12-09 09:56:14.697190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.255 [2024-12-09 09:56:14.697195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.255 [2024-12-09 09:56:14.697205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.255 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.707148] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.707198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.707208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.707213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.707217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.707227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 
00:38:39.518 [2024-12-09 09:56:14.717112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.717156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.717166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.717171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.717175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.717185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.727003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.727047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.727057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.727062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.727066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.727076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.737184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.737233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.737243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.737248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.737252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.737262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 
00:38:39.518 [2024-12-09 09:56:14.747247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.747292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.747302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.747306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.747310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.747321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.757199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.757241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.757251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.757256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.757260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.757271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.767102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.767144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.767155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.767160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.767164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.767174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 
00:38:39.518 [2024-12-09 09:56:14.777325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.777376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.777385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.777390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.777394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.777404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.787333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.787390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.787400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.787405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.787409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.787419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 00:38:39.518 [2024-12-09 09:56:14.797313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.797358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.797367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.797372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.797377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.518 [2024-12-09 09:56:14.797387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.518 qpair failed and we were unable to recover it. 
00:38:39.518 [2024-12-09 09:56:14.807331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.518 [2024-12-09 09:56:14.807371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.518 [2024-12-09 09:56:14.807381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.518 [2024-12-09 09:56:14.807388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.518 [2024-12-09 09:56:14.807392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.807403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.817421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.817470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.817480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.817485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.817489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.817499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.827453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.827501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.827511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.827516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.827520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.827530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 
00:38:39.519 [2024-12-09 09:56:14.837391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.837460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.837470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.837475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.837479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.837489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.847455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.847498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.847507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.847512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.847517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.847530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.857530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.857602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.857613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.857617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.857622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.857632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 
00:38:39.519 [2024-12-09 09:56:14.867457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.867512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.867522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.867527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.867531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.867541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.877534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.877575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.877585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.877590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.877594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.877604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.887437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.887502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.887512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.887517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.887521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.887531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 
00:38:39.519 [2024-12-09 09:56:14.897659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.897712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.897722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.897727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.897731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.897742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.907673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.907726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.907736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.907741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.907745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.907756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 00:38:39.519 [2024-12-09 09:56:14.917666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:39.519 [2024-12-09 09:56:14.917710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:39.519 [2024-12-09 09:56:14.917720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:39.519 [2024-12-09 09:56:14.917725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:39.519 [2024-12-09 09:56:14.917729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:39.519 [2024-12-09 09:56:14.917740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:39.519 qpair failed and we were unable to recover it. 
00:38:40.308 [2024-12-09 09:56:15.589554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.589608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.589618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.589623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.589628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.589643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.599507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.599550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.599560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.599565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.599569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.599580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.609554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.609647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.609657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.609662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.609667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.609677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 
00:38:40.308 [2024-12-09 09:56:15.619613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.619667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.619677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.619682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.619687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.619697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.629617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.629689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.629703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.629709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.629715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.629725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.639636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.639684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.639694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.639699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.639704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.639714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 
00:38:40.308 [2024-12-09 09:56:15.649669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.649708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.649718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.649723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.649728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.649738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.659726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.659777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.659788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.659794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.659798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.308 [2024-12-09 09:56:15.659809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.308 qpair failed and we were unable to recover it. 00:38:40.308 [2024-12-09 09:56:15.669736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.308 [2024-12-09 09:56:15.669786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.308 [2024-12-09 09:56:15.669796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.308 [2024-12-09 09:56:15.669801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.308 [2024-12-09 09:56:15.669809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.669819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 
00:38:40.309 [2024-12-09 09:56:15.679738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.679780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.679790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.679795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.679800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.679810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.309 [2024-12-09 09:56:15.689803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.689860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.689870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.689875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.689879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.689890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.309 [2024-12-09 09:56:15.699841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.699890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.699900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.699905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.699910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.699920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 
00:38:40.309 [2024-12-09 09:56:15.709883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.709932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.709942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.709948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.709952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.709963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.309 [2024-12-09 09:56:15.719880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.719923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.719933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.719939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.719943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.719954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.309 [2024-12-09 09:56:15.729886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.729929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.729939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.729944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.729948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.729959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 
00:38:40.309 [2024-12-09 09:56:15.739968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.740053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.740063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.740068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.740073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.740084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.309 [2024-12-09 09:56:15.750003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.309 [2024-12-09 09:56:15.750087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.309 [2024-12-09 09:56:15.750098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.309 [2024-12-09 09:56:15.750104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.309 [2024-12-09 09:56:15.750109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.309 [2024-12-09 09:56:15.750120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.309 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.759976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.760021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.760035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.760040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.760044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.760055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 
00:38:40.570 [2024-12-09 09:56:15.770010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.770059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.770069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.770075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.770079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.770090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.780063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.780144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.780154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.780160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.780165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.780175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.790083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.790169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.790178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.790184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.790189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.790200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 
00:38:40.570 [2024-12-09 09:56:15.800071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.800112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.800122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.800129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.800134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.800145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.810151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.810242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.810252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.810258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.810263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.810273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.820179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.820227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.820237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.820242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.820247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.820257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 
00:38:40.570 [2024-12-09 09:56:15.830117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.830166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.830176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.830182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.830186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.830197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.840180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.840231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.840241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.840246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.840251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.840261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 00:38:40.570 [2024-12-09 09:56:15.850186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.850232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.850242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.850248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.850253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.570 [2024-12-09 09:56:15.850262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.570 qpair failed and we were unable to recover it. 
00:38:40.570 [2024-12-09 09:56:15.860299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.570 [2024-12-09 09:56:15.860349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.570 [2024-12-09 09:56:15.860359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.570 [2024-12-09 09:56:15.860364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.570 [2024-12-09 09:56:15.860369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.860379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.870291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.870353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.870372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.870378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.870383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.870397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.880333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.880378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.880389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.880394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.880399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.880410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 
00:38:40.571 [2024-12-09 09:56:15.890351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.890399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.890418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.890424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.890429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.890443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.900382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.900440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.900452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.900458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.900463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.900475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.910439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.910491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.910509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.910515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.910521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.910535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 
00:38:40.571 [2024-12-09 09:56:15.920387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.920430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.920441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.920447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.920451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.920463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.930414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.930465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.930483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.930493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.930498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.930513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.940515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.940565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.940576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.940582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.940586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.940598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 
00:38:40.571 [2024-12-09 09:56:15.950544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.950627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.950646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.950652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.950657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.950669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.960472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.960520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.960530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.960535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.960540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.960550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.970531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.970597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.970608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.970613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.970618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.970631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 
00:38:40.571 [2024-12-09 09:56:15.980611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.980665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.980675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.980681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.980685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.980696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.571 qpair failed and we were unable to recover it. 00:38:40.571 [2024-12-09 09:56:15.990605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.571 [2024-12-09 09:56:15.990660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.571 [2024-12-09 09:56:15.990670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.571 [2024-12-09 09:56:15.990676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.571 [2024-12-09 09:56:15.990680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.571 [2024-12-09 09:56:15.990691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.572 qpair failed and we were unable to recover it. 00:38:40.572 [2024-12-09 09:56:16.000630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.572 [2024-12-09 09:56:16.000675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.572 [2024-12-09 09:56:16.000686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.572 [2024-12-09 09:56:16.000691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.572 [2024-12-09 09:56:16.000696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.572 [2024-12-09 09:56:16.000706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.572 qpair failed and we were unable to recover it. 
00:38:40.572 [2024-12-09 09:56:16.010615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.572 [2024-12-09 09:56:16.010665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.572 [2024-12-09 09:56:16.010675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.572 [2024-12-09 09:56:16.010680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.572 [2024-12-09 09:56:16.010684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.572 [2024-12-09 09:56:16.010695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.572 qpair failed and we were unable to recover it. 00:38:40.832 [2024-12-09 09:56:16.020697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.832 [2024-12-09 09:56:16.020751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.832 [2024-12-09 09:56:16.020762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.832 [2024-12-09 09:56:16.020767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.832 [2024-12-09 09:56:16.020772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.832 [2024-12-09 09:56:16.020782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.832 qpair failed and we were unable to recover it. 00:38:40.832 [2024-12-09 09:56:16.030757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:38:40.833 [2024-12-09 09:56:16.030805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:38:40.833 [2024-12-09 09:56:16.030815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:38:40.833 [2024-12-09 09:56:16.030820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:40.833 [2024-12-09 09:56:16.030825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90 00:38:40.833 [2024-12-09 09:56:16.030836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:38:40.833 qpair failed and we were unable to recover it. 
00:38:40.833 [2024-12-09 09:56:16.040739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.040819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.040829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.040834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.040839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:40.833 [2024-12-09 09:56:16.040850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.050776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.050817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.050828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.050833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.050838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:40.833 [2024-12-09 09:56:16.050848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.060840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.060887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.060899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.060905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.060909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:40.833 [2024-12-09 09:56:16.060920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.070883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.070928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.070938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.070943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.070948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf8000b90
00:38:40.833 [2024-12-09 09:56:16.070958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.080864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.080961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.081024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.081050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.081071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf4000b90
00:38:40.833 [2024-12-09 09:56:16.081126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.090890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.090970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.091001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.091017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.091031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf4000b90
00:38:40.833 [2024-12-09 09:56:16.091062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.100947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.101009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.101029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.101040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.101050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2cf4000b90
00:38:40.833 [2024-12-09 09:56:16.101078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.110980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.111092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.111156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.111182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.111204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc74130
00:38:40.833 [2024-12-09 09:56:16.111257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.120930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:38:40.833 [2024-12-09 09:56:16.120996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:38:40.833 [2024-12-09 09:56:16.121025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:38:40.833 [2024-12-09 09:56:16.121040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:38:40.833 [2024-12-09 09:56:16.121053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xc74130
00:38:40.833 [2024-12-09 09:56:16.121080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:38:40.833 qpair failed and we were unable to recover it.
00:38:40.833 [2024-12-09 09:56:16.121238] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:38:40.833 A controller has encountered a failure and is being reset.
00:38:40.833 [2024-12-09 09:56:16.121355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc81ff0 (9): Bad file descriptor
00:38:40.833 Controller properly reset.
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Write completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.833 Read completed with error (sct=0, sc=8)
00:38:40.833 starting I/O failed
00:38:40.834 Write completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 Read completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 Read completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 Read completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 Write completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 Write completed with error (sct=0, sc=8)
00:38:40.834 starting I/O failed
00:38:40.834 [2024-12-09 09:56:16.136370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:38:40.834 Initializing NVMe Controllers
00:38:40.834 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:40.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:40.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:38:40.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:38:40.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:38:40.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:38:40.834 Initialization complete. Launching workers.
00:38:40.834 Starting thread on core 1
00:38:40.834 Starting thread on core 2
00:38:40.834 Starting thread on core 3
00:38:40.834 Starting thread on core 0
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:38:40.834 
00:38:40.834 real 0m11.324s
00:38:40.834 user 0m21.874s
00:38:40.834 sys 0m3.836s
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:38:40.834 ************************************
00:38:40.834 END TEST nvmf_target_disconnect_tc2
00:38:40.834 ************************************
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:40.834 rmmod nvme_tcp
00:38:40.834 rmmod nvme_fabrics
00:38:40.834 rmmod nvme_keyring
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3066510 ']'
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3066510
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3066510 ']'
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3066510
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:40.834 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3066510
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3066510'
killing process with pid 3066510
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3066510
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3066510
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:41.095 09:56:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:43.646 
00:38:43.646 real 0m21.508s
00:38:43.646 user 0m49.448s
00:38:43.646 sys 0m9.852s
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:38:43.646 ************************************
00:38:43.646 END TEST nvmf_target_disconnect
00:38:43.646 ************************************
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:38:43.646 
00:38:43.646 real 7m45.682s
00:38:43.646 user 17m13.589s
00:38:43.646 sys 2m21.824s
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:43.646 09:56:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:38:43.646 ************************************
00:38:43.646 END TEST nvmf_host
00:38:43.646 ************************************
00:38:43.646 09:56:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:38:43.646 09:56:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:38:43.646 09:56:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:43.646 09:56:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:43.646 09:56:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:43.646 09:56:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:38:43.646 ************************************
00:38:43.646 START TEST nvmf_target_core_interrupt_mode
00:38:43.646 ************************************
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:38:43.646 * Looking for test storage...
00:38:43.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:38:43.646 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:38:43.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.647 --rc genhtml_branch_coverage=1 
00:38:43.647 --rc genhtml_function_coverage=1 
00:38:43.647 --rc genhtml_legend=1 
00:38:43.647 --rc geninfo_all_blocks=1 
00:38:43.647 --rc geninfo_unexecuted_blocks=1 
00:38:43.647 
00:38:43.647 '
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:38:43.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.647 --rc genhtml_branch_coverage=1 
00:38:43.647 --rc genhtml_function_coverage=1 
00:38:43.647 --rc genhtml_legend=1 
00:38:43.647 --rc geninfo_all_blocks=1 
00:38:43.647 --rc geninfo_unexecuted_blocks=1 
00:38:43.647 
00:38:43.647 '
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:38:43.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.647 --rc genhtml_branch_coverage=1 
00:38:43.647 --rc genhtml_function_coverage=1 
00:38:43.647 --rc genhtml_legend=1 
00:38:43.647 --rc geninfo_all_blocks=1 
00:38:43.647 --rc geninfo_unexecuted_blocks=1 
00:38:43.647 
00:38:43.647 '
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:38:43.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.647 --rc genhtml_branch_coverage=1 
00:38:43.647 --rc genhtml_function_coverage=1 
00:38:43.647 --rc genhtml_legend=1 
00:38:43.647 --rc geninfo_all_blocks=1 
00:38:43.647 --rc geninfo_unexecuted_blocks=1 
00:38:43.647 
00:38:43.647 '
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:38:43.647 ************************************
00:38:43.647 START TEST nvmf_abort
00:38:43.647 ************************************
00:38:43.647 09:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:38:43.647 * Looking for test storage...
00:38:43.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:43.647 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:38:43.648 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:43.909 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:38:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.910 --rc genhtml_branch_coverage=1 
00:38:43.910 --rc genhtml_function_coverage=1 
00:38:43.910 --rc genhtml_legend=1 
00:38:43.910 --rc geninfo_all_blocks=1 
00:38:43.910 --rc geninfo_unexecuted_blocks=1 
00:38:43.910 
00:38:43.910 '
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 
00:38:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.910 --rc genhtml_branch_coverage=1 
00:38:43.910 --rc genhtml_function_coverage=1 
00:38:43.910 --rc genhtml_legend=1 
00:38:43.910 --rc geninfo_all_blocks=1 
00:38:43.910 --rc geninfo_unexecuted_blocks=1 
00:38:43.910 
00:38:43.910 '
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:38:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.910 --rc genhtml_branch_coverage=1 
00:38:43.910 --rc genhtml_function_coverage=1 
00:38:43.910 --rc genhtml_legend=1 
00:38:43.910 --rc geninfo_all_blocks=1 
00:38:43.910 --rc geninfo_unexecuted_blocks=1 
00:38:43.910 
00:38:43.910 '
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:38:43.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:38:43.910 --rc genhtml_branch_coverage=1 
00:38:43.910 --rc genhtml_function_coverage=1 
00:38:43.910 --rc genhtml_legend=1 
00:38:43.910 --rc geninfo_all_blocks=1 
00:38:43.910 --rc geninfo_unexecuted_blocks=1 
00:38:43.910 
00:38:43.910 '
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:38:43.910 09:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:38:50.504 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:50.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:50.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:50.504 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:50.505 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:50.505 09:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:50.765 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:50.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:50.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:38:50.765 00:38:50.766 --- 10.0.0.2 ping statistics --- 00:38:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.766 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:50.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:50.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:38:50.766 00:38:50.766 --- 10.0.0.1 ping statistics --- 00:38:50.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:50.766 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3071935 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3071935 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3071935 ']' 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:50.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:50.766 09:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.026 [2024-12-09 09:56:26.253513] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:51.026 [2024-12-09 09:56:26.254497] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:38:51.026 [2024-12-09 09:56:26.254535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.026 [2024-12-09 09:56:26.348407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:51.026 [2024-12-09 09:56:26.366073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.026 [2024-12-09 09:56:26.366106] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.026 [2024-12-09 09:56:26.366114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.026 [2024-12-09 09:56:26.366120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.026 [2024-12-09 09:56:26.366130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:51.026 [2024-12-09 09:56:26.367406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:51.026 [2024-12-09 09:56:26.367561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:51.026 [2024-12-09 09:56:26.367563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:51.026 [2024-12-09 09:56:26.416921] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:51.026 [2024-12-09 09:56:26.416966] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:51.026 [2024-12-09 09:56:26.417580] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
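Condensed, the bring-up and target launch traced above reduce to the short shell sequence below. Every command is lifted from the xtrace; only the trailing '&' and the relative binary path are simplifications of what the harness (waitforlisten, nvmfappstart) actually does:

    # Move the target-side port into a private namespace; address both ends on 10.0.0.0/24.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port and confirm reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the target inside the namespace, interrupt-driven, on cores 1-3 (mask 0xE).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &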
00:38:51.026 [2024-12-09 09:56:26.417901] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:51.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:38:51.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:51.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:51.596 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 [2024-12-09 09:56:27.088382] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 Malloc0 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 Delay0 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 [2024-12-09 09:56:27.184297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:51.856 09:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:38:52.116 [2024-12-09 09:56:27.368818] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:54.662 Initializing NVMe Controllers 00:38:54.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:54.662 controller IO queue size 128 less than required 00:38:54.662 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:38:54.662 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:38:54.662 Initialization complete. Launching workers. 
00:38:54.662 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29012 00:38:54.662 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29069, failed to submit 66 00:38:54.662 success 29012, unsuccessful 57, failed 0 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:54.662 rmmod nvme_tcp 00:38:54.662 rmmod nvme_fabrics 00:38:54.662 rmmod nvme_keyring 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3071935 ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3071935 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3071935 ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3071935 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071935 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071935' 00:38:54.662 killing process with pid 3071935 
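The tallies above are the whole point of the abort test: with queue depth 128 against a controller that advertises a smaller IO queue, reads stack up in the driver and get aborted, and the 29012 "failed" reads are exactly the ones whose aborts succeeded (success 29012), with 57 aborts unsuccessful and 0 failing outright. The invocation, verbatim from the trace with only the path shortened:

    # Short single-core run at queue depth 128 against the cnode0 subsystem, warnings only.
    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128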
00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3071935 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3071935 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:54.662 09:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:56.579 00:38:56.579 real 0m13.001s 00:38:56.579 user 0m11.275s 00:38:56.579 sys 0m6.548s 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:38:56.579 ************************************ 00:38:56.579 END TEST nvmf_abort 00:38:56.579 ************************************ 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:56.579 ************************************ 00:38:56.579 START TEST nvmf_ns_hotplug_stress 00:38:56.579 ************************************ 00:38:56.579 09:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:38:56.840 * Looking for test storage... 
00:38:56.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.840 --rc genhtml_branch_coverage=1 00:38:56.840 --rc genhtml_function_coverage=1 00:38:56.840 --rc genhtml_legend=1 00:38:56.840 --rc geninfo_all_blocks=1 00:38:56.840 --rc geninfo_unexecuted_blocks=1 00:38:56.840 00:38:56.840 ' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.840 --rc genhtml_branch_coverage=1 00:38:56.840 --rc genhtml_function_coverage=1 00:38:56.840 --rc genhtml_legend=1 00:38:56.840 --rc geninfo_all_blocks=1 00:38:56.840 --rc geninfo_unexecuted_blocks=1 00:38:56.840 00:38:56.840 ' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.840 --rc genhtml_branch_coverage=1 00:38:56.840 --rc genhtml_function_coverage=1 00:38:56.840 --rc genhtml_legend=1 00:38:56.840 --rc geninfo_all_blocks=1 00:38:56.840 --rc geninfo_unexecuted_blocks=1 00:38:56.840 00:38:56.840 ' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:56.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:56.840 --rc genhtml_branch_coverage=1 00:38:56.840 --rc genhtml_function_coverage=1 
00:38:56.840 --rc genhtml_legend=1 00:38:56.840 --rc geninfo_all_blocks=1 00:38:56.840 --rc geninfo_unexecuted_blocks=1 00:38:56.840 00:38:56.840 ' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.840 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:38:56.841 09:56:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:04.975 09:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:04.975 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:04.976 09:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:04.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:04.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:04.976 
09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:04.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:04.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:04.976 09:56:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:04.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:04.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:39:04.976 00:39:04.976 --- 10.0.0.2 ping statistics --- 00:39:04.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.976 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:04.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:04.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:39:04.976 00:39:04.976 --- 10.0.0.1 ping statistics --- 00:39:04.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:04.976 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:04.976 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3076635 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3076635 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3076635 ']' 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:04.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:04.977 09:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:04.977 [2024-12-09 09:56:39.481332] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:04.977 [2024-12-09 09:56:39.482379] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:39:04.977 [2024-12-09 09:56:39.482424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:04.977 [2024-12-09 09:56:39.580952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:04.977 [2024-12-09 09:56:39.608069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:04.977 [2024-12-09 09:56:39.608116] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:04.977 [2024-12-09 09:56:39.608125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:04.977 [2024-12-09 09:56:39.608132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:04.977 [2024-12-09 09:56:39.608139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:04.977 [2024-12-09 09:56:39.610043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:04.977 [2024-12-09 09:56:39.610206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.977 [2024-12-09 09:56:39.610208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:04.977 [2024-12-09 09:56:39.671452] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:04.977 [2024-12-09 09:56:39.671519] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:04.977 [2024-12-09 09:56:39.672139] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:04.977 [2024-12-09 09:56:39.672451] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:39:04.977 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:05.238 [2024-12-09 09:56:40.507068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:05.238 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:05.498 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:05.498 [2024-12-09 09:56:40.855751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:05.498 09:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:05.758 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:39:06.019 Malloc0 00:39:06.019 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:06.019 Delay0 00:39:06.019 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:06.280 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:39:06.280 NULL1 00:39:06.541 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
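Everything from nvmf_create_transport down to the add_ns of NULL1 above is fixture setup for the hotplug stress: the delay bdev (Delay0) keeps I/O in flight long enough to race with namespace changes, and the null bdev (NULL1) gives the loop something cheap to resize. Together with the spdk_nvme_perf run and the remove/add/resize churn that follow in the trace, it condenses to the sketch below; the rpc shorthand, PID plumbing, and size counter are illustrative, while the calls themselves are verbatim:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # 30 s of random reads; -Q 1000 tolerates the errors the churn will cause.
    ./build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    perf_pid=$!

    # Churn namespaces under the workload for as long as perf stays alive.
    size=1000
    while kill -0 "$perf_pid" 2>/dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        $rpc bdev_null_resize NULL1 $((++size))
    done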
00:39:06.541 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:39:06.541 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3077136 00:39:06.541 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:06.541 09:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:06.802 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.062 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:39:07.062 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:39:07.062 true 00:39:07.062 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:07.062 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.323 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:07.584 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:39:07.584 09:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:39:07.584 true 00:39:07.845 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:07.845 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:07.845 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.106 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:39:08.106 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:39:08.368 true 00:39:08.368 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:08.368 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:08.368 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:08.629 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:39:08.630 09:56:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:39:08.891 true 00:39:08.891 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:08.891 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.152 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.152 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:39:09.152 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:39:09.413 true 00:39:09.413 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:09.413 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:09.674 09:56:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:09.958 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:39:09.958 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:39:09.958 true 00:39:09.958 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:09.958 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.250 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.250 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:39:10.250 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:39:10.517 true 00:39:10.517 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:10.517 09:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:10.778 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:10.779 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:39:10.779 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:39:11.039 true 00:39:11.039 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:11.039 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.299 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:11.559 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:39:11.559 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:39:11.559 true 00:39:11.559 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:11.559 09:56:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:11.820 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.080 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:39:12.080 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:39:12.080 true 00:39:12.080 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3077136 00:39:12.080 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.340 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:12.600 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:39:12.600 09:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:39:12.860 true 00:39:12.860 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:12.860 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:12.860 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.119 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:39:13.119 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:39:13.378 true 00:39:13.378 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:13.378 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:13.378 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:13.637 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:39:13.637 09:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:39:13.896 true 00:39:13.896 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:13.896 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.155 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.155 09:56:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:39:14.155 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:39:14.414 true 00:39:14.414 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:14.414 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:14.674 09:56:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:14.674 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:39:14.674 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:39:14.935 true 00:39:14.935 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:14.935 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.197 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.458 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:39:15.458 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:39:15.458 true 00:39:15.458 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:15.458 09:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:15.720 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:15.980 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:39:15.980 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:39:15.980 true 00:39:16.241 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:16.241 09:56:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.241 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:16.501 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:39:16.501 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:39:16.762 true 00:39:16.762 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:16.762 09:56:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:16.762 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.024 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:39:17.024 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:39:17.285 true 00:39:17.285 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:17.285 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:17.285 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:17.546 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:39:17.546 09:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:39:17.808 true 00:39:17.808 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:17.808 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.069 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.070 09:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:39:18.070 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:39:18.331 true 00:39:18.331 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:18.331 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:18.593 09:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:18.856 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:39:18.856 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:39:18.856 true 00:39:18.856 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:18.856 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.120 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.383 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:39:19.383 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:39:19.383 true 00:39:19.383 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:19.383 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:19.646 09:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:19.907 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:39:19.907 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:39:19.907 true 00:39:19.907 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:19.907 09:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.167 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.427 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:39:20.427 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:39:20.427 true 00:39:20.688 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:20.688 09:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:20.688 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:20.948 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:39:20.948 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:39:21.209 true 00:39:21.209 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:21.209 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.209 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.469 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:39:21.469 09:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:39:21.730 true 00:39:21.730 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:21.730 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:21.991 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:21.991 09:56:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:39:21.991 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:39:22.253 true 00:39:22.253 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:22.253 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:22.515 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:22.515 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:39:22.515 09:56:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:39:22.777 true 00:39:22.777 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:22.777 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.039 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.301 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:39:23.301 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:39:23.301 true 00:39:23.301 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:23.301 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:23.563 09:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:23.825 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:39:23.825 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:39:23.825 true 00:39:23.825 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:23.825 09:56:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.087 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.347 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:39:24.347 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:39:24.347 true 00:39:24.347 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:24.347 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:24.608 09:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:24.869 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:39:24.869 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:39:24.869 true 00:39:25.131 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:25.131 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.131 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.405 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:39:25.405 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:39:25.667 true 00:39:25.667 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:25.667 09:57:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:25.667 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:25.927 09:57:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:39:25.927 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:39:26.188 true 00:39:26.188 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:26.188 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.188 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.448 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:39:26.448 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:39:26.709 true 00:39:26.709 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:26.709 09:57:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:26.969 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:26.969 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:39:26.969 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:39:27.229 true 00:39:27.229 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:27.229 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:27.490 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:27.490 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:39:27.490 09:57:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:39:27.749 true 00:39:27.749 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:27.749 09:57:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.009 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.268 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:39:28.268 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:39:28.268 true 00:39:28.268 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:28.268 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:28.529 09:57:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:28.790 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:39:28.790 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:39:28.790 true 00:39:28.790 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:28.790 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.049 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.311 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:39:29.311 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:39:29.311 true 00:39:29.573 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:29.573 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:29.573 09:57:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:29.843 09:57:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:39:29.843 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:39:30.104 true 00:39:30.104 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:30.104 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.104 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.365 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:39:30.365 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:39:30.625 true 00:39:30.625 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:30.625 09:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:30.625 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:30.886 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:39:30.886 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:39:31.147 true 00:39:31.147 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:31.147 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.408 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.408 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:39:31.408 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:39:31.668 true 00:39:31.668 09:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:31.668 09:57:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:31.930 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:31.930 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:39:31.930 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:39:32.212 true 00:39:32.212 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:32.212 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.474 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:32.474 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:39:32.474 09:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:39:32.735 true 00:39:32.735 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:32.735 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:32.996 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.257 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:39:33.257 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:39:33.257 true 00:39:33.257 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:33.257 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:33.518 09:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:33.779 09:57:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:39:33.779 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:39:33.779 true 00:39:33.779 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:33.779 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.040 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.300 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:39:34.300 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:39:34.300 true 00:39:34.561 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:34.562 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:34.562 09:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:34.822 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:39:34.822 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:39:35.083 true 00:39:35.083 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:35.083 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:35.083 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:35.345 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:39:35.345 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:39:35.606 true 00:39:35.606 09:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136 00:39:35.606 09:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:35.867 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:35.867 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:39:35.867 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:39:36.128 true
00:39:36.128 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136
00:39:36.128 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:36.389 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:36.389 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:39:36.389 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:39:36.649 true
00:39:36.649 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136
00:39:36.649 09:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:36.910 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:39:36.910 Initializing NVMe Controllers
00:39:36.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:39:36.910 Controller IO queue size 128, less than required.
00:39:36.910 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:39:36.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:39:36.910 Initialization complete. Launching workers.
00:39:36.910 ========================================================
00:39:36.910 Latency(us)
00:39:36.910 Device Information : IOPS MiB/s Average min max
00:39:36.910 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30366.20 14.83 4215.32 1116.43 10877.60
00:39:36.910 ========================================================
00:39:36.910 Total : 30366.20 14.83 4215.32 1116.43 10877.60
00:39:36.910
00:39:36.910 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:39:36.910 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:39:37.171 true
00:39:37.172 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3077136
00:39:37.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3077136) - No such process
00:39:37.172 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3077136
00:39:37.172 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:37.433 09:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:39:37.693 null0
00:39:37.693 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:37.693 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:37.693 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:39:37.955 null1
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:39:37.955 null2
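
At this point spdk_nvme_perf has exited: the kill -0 liveness probe at script line 44 fails with "No such process", the loop ends, and the wait at line 53 reaps the perf process before both namespaces are removed. The first phase thus completed 30 s of queue-depth-128, 512-byte random reads (the -t/-q/-o/-w flags of the perf invocation above) against NSID 2 while NSID 1 was repeatedly hot-removed and re-added and NULL1 was grown from 1000 to 1055 MiB under load. The summary table is self-consistent: 30366.20 IOPS x 512 B ≈ 14.83 MiB/s. The earlier "Controller IO queue size 128, less than required" notice is expected at this depth: a 128-entry NVMe queue holds at most 127 outstanding commands, so the surplus queues at the driver, as the log itself says. Per the xtrace tags, the loop that just finished (script lines 44-50) amounts to this sketch; the variable names are assumptions, the commands are verbatim from the log:

while kill -0 "$PERF_PID"; do                                    # line 44: loop while perf is alive
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # line 45: hot-unplug Delay0 (NSID 1)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # line 46: hot-replug it
    null_size=$((null_size + 1))                                 # line 49
    $rpc bdev_null_resize NULL1 "$null_size"                     # line 50: grow the live NSID 2 bdev by 1 MiB
done
wait "$PERF_PID"                                                 # line 53: reap perf, propagate its exit status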
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:37.955 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:39:38.215 null3
00:39:38.215 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:38.215 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:38.215 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:39:38.476 null4
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:39:38.476 null5
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:38.476 09:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:39:38.737 null6
00:39:38.738 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:38.738 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:38.738 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:39:39.000 null7
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
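From here on, the @62-@64 records launch eight add_remove workers in the background, and the interleaved @14-@18 records are their bodies executing concurrently, which is why counter increments and RPC calls for different nsids appear shuffled together. Reconstructed from the echoed line numbers (a sketch, not the verbatim script):

```bash
# add_remove: hot-plug one namespace ten times. The @16 records are the loop
# counter, @17 the attach, @18 the detach.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()

add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

# Launch loop matching the @62-@64 records: nsid 1..8 mapped onto null0..null7.
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)   # collected for the @66 wait further down
done
```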
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
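Because eight workers race on the same subsystem, the set of attached namespaces at any instant is unpredictable; the stress is precisely this churn of concurrent attach/detach RPCs against cnode1. To spot-check the live nsid-to-bdev map while such a test runs, SPDK's nvmf_get_subsystems query RPC dumps subsystem state; the jq filter below is illustrative and assumes jq is available:

```bash
# Dump the current namespace map of cnode1 (illustrative; relies on the
# standard nvmf_get_subsystems JSON output with a "namespaces" array).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" nvmf_get_subsystems \
  | jq -r '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1")
               | .namespaces[] | "\(.nsid)\t\(.bdev_name)"'
```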
00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.000 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
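All eight workers are running at this point; the pid list echoed at @66 just below (3083877 through 3083895) is what the harness blocks on before moving to the next phase. Continuing the launch sketch above:

```bash
# Join step matching the "@66 -- # wait 3083877 ..." record below: block
# until every worker has finished its ten add/remove iterations.
wait "${pids[@]}"
```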
00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3083877 3083879 3083882 3083884 3083887 3083890 3083893 3083895 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.001 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.262 09:57:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.262 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.523 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.524 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:39.794 09:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:39.794 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.055 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.317 
09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.317 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.578 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.578 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.578 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.578 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.578 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:40.579 09:57:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:40.840 
09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:40.840 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.102 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.362 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:41.623 09:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:41.623 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:41.885 09:57:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:41.885 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.147 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.148 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:39:42.148 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.148 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.148 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:39:42.408 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:39:42.409 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:39:42.669 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:39:42.669 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:39:42.669 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 
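The interleaved records above all come from three lines of target/ns_hotplug_stress.sh: a ten-iteration counter at line 16 ("(( ++i ))" / "(( i < 10 ))"), an add step at line 17 that attaches null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8, and a remove step at line 18 that detaches them again, hammering namespace hotplug on a live subsystem. A minimal sketch of that loop, assuming plain sequential rpc.py calls (the real script evidently issues them in a shuffled or concurrent order, which is why the nsids above arrive out of sequence):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
i=0
while (( i < 10 )); do
    for n in 1 2 3 4 5 6 7 8; do
        # Attach null bdev null(n-1) to the subsystem as namespace n.
        $rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
    done
    for n in 1 2 3 4 5 6 7 8; do
        # Detach namespace n again while the target keeps serving I/O.
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
    done
    (( ++i ))
done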
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:42.670 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:42.670 rmmod nvme_tcp 00:39:42.670 rmmod nvme_fabrics 00:39:42.930 rmmod nvme_keyring 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3076635 ']' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3076635 ']' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3076635' 00:39:42.930 killing process with pid 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3076635 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:42.930 09:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:45.477 00:39:45.477 real 0m48.429s 00:39:45.477 user 3m1.596s 00:39:45.477 sys 0m22.276s 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:39:45.477 ************************************ 00:39:45.477 END TEST nvmf_ns_hotplug_stress 00:39:45.477 ************************************ 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:45.477 ************************************ 00:39:45.477 START TEST nvmf_delete_subsystem 00:39:45.477 
************************************ 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:39:45.477 * Looking for test storage... 00:39:45.477 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:45.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.477 --rc genhtml_branch_coverage=1 00:39:45.477 --rc genhtml_function_coverage=1 00:39:45.477 --rc genhtml_legend=1 00:39:45.477 --rc geninfo_all_blocks=1 00:39:45.477 --rc geninfo_unexecuted_blocks=1 00:39:45.477 00:39:45.477 ' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:45.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.477 --rc genhtml_branch_coverage=1 00:39:45.477 --rc genhtml_function_coverage=1 00:39:45.477 --rc genhtml_legend=1 00:39:45.477 --rc geninfo_all_blocks=1 00:39:45.477 --rc geninfo_unexecuted_blocks=1 00:39:45.477 00:39:45.477 ' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:45.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.477 --rc genhtml_branch_coverage=1 00:39:45.477 --rc genhtml_function_coverage=1 00:39:45.477 --rc genhtml_legend=1 00:39:45.477 --rc geninfo_all_blocks=1 00:39:45.477 --rc geninfo_unexecuted_blocks=1 00:39:45.477 00:39:45.477 ' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:45.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:45.477 --rc genhtml_branch_coverage=1 00:39:45.477 --rc genhtml_function_coverage=1 00:39:45.477 --rc 
genhtml_legend=1 00:39:45.477 --rc geninfo_all_blocks=1 00:39:45.477 --rc geninfo_unexecuted_blocks=1 00:39:45.477 00:39:45.477 ' 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:45.477 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:45.478 09:57:20 
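The lcov probe traced just above (lt 1.15 2, the decimal calls, the ver1/ver2 arrays) is autotest_common.sh deciding whether the installed lcov is new enough to enable branch/function coverage flags. A hedged reconstruction of the comparison helper, field by field over dot/dash-separated components (the real decimal() helper additionally validates each field with the "[[ $d =~ ^[0-9]+$ ]]" check visible in the trace):

# cmp_versions 1.15 '<' 2  ->  exit 0, because 1 < 2 in the first field.
cmp_versions() {
    local IFS=.-
    local -a ver1=($1) ver2=($3)
    local op=$2 v a b
    local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
}
lt() { cmp_versions "$1" '<' "$2"; }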
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:39:45.478 09:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:52.085 09:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:52.085 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:52.086 09:57:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:52.086 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:52.086 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.086 09:57:27 
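prepare_net_devs walks the PCI bus for NICs the NVMf tests know how to drive, bucketing device IDs into the e810/x722/mlx arrays declared above; on this host it matches the two Intel E810 ports (vendor 0x8086, device 0x159b, ice driver) at 0000:4b:00.0 and 0000:4b:00.1 and resolves each to its kernel net device. A rough sysfs equivalent of that scan, assuming just the E810 ID seen here:

for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    # Intel E810 port: vendor 0x8086, device 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        net=("$pci"/net/*)                # net device(s) registered behind the port
        echo "Found ${pci##*/} ($vendor - $device): ${net[@]##*/}"
    fi
done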
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:52.086 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:52.086 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:52.086 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:52.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:52.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:39:52.347 00:39:52.347 --- 10.0.0.2 ping statistics --- 00:39:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.347 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:52.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:52.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:39:52.347 00:39:52.347 --- 10.0.0.1 ping statistics --- 00:39:52.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.347 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:52.347 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:52.348 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:52.348 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:52.348 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:52.348 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3088909 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3088909 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3088909 ']' 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:52.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
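Those ping exchanges close out nvmf_tcp_init, which gives the single host a two-endpoint topology: port cvl_0_0 is moved into network namespace cvl_0_0_ns_spdk as the target interface (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1/24), TCP port 4420 is opened in iptables, and one ping in each direction proves the path. Condensed from the records above (address flushes omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port gets its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator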
00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.609 09:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:52.609 [2024-12-09 09:57:27.882659] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:52.609 [2024-12-09 09:57:27.883785] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:39:52.609 [2024-12-09 09:57:27.883843] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:52.609 [2024-12-09 09:57:27.983817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:52.609 [2024-12-09 09:57:28.011098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:52.609 [2024-12-09 09:57:28.011149] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:52.609 [2024-12-09 09:57:28.011158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:52.609 [2024-12-09 09:57:28.011165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:52.609 [2024-12-09 09:57:28.011171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:52.609 [2024-12-09 09:57:28.012649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:52.609 [2024-12-09 09:57:28.012659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:52.870 [2024-12-09 09:57:28.067250] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:52.870 [2024-12-09 09:57:28.067843] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:52.870 [2024-12-09 09:57:28.068143] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
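nvmfappstart boots the target inside that namespace as "nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3": core mask 0x3 gives two reactors (cores 0 and 1), and --interrupt-mode makes SPDK threads sleep on event fds instead of busy-polling, which the thread.c notices above confirm for app_thread and both nvmf_tgt poll groups. A sketch of the launch-and-wait pattern, where polling rpc_get_methods stands in for waitforlisten's readiness check (an assumption; the helper's internals differ):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Hold off on configuration RPCs until the app answers on its UNIX socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    sleep 0.1
done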
00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 [2024-12-09 09:57:28.729736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 [2024-12-09 09:57:28.762083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 NULL1 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 Delay0 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3089185 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:39:53.442 09:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:53.442 [2024-12-09 09:57:28.856685] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
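The rpc_cmd records above fully provision the target: a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev that is exposed as the namespace. A sketch of the same sequence, assuming rpc_cmd is the usual thin wrapper around scripts/rpc.py on the default RPC socket (arguments copied verbatim from the xtrace):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MB bdev, 512-byte blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # latencies in microseconds, ~1 s per I/O class
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The one-second delay is what keeps spdk_nvme_perf's queue-depth-128 workload pinned in flight long enough for the nvmf_delete_subsystem call below to abort it; the roughly 1,000,000 us averages in the later latency tables are consistent with that configuration.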
00:39:55.362 09:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:55.362 09:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:55.362 09:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:39:55.623 spdk_nvme_perf: repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' as queued and newly submitted I/O fail during subsystem teardown.
00:39:55.624 [2024-12-09 09:57:31.032467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218df70 is same with the state(6) to be set
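A decoding note for the completion errors condensed above, using the standard NVMe status fields (values from the NVMe base specification, not printed by this log):

    # sct=0       -> Status Code Type 0: Generic Command Status
    # sc=8 (0x08) -> Command Aborted due to SQ Deletion
    # 'starting I/O failed: -6' is -ENXIO from the submission path once the
    # queue pairs backing the deleted subsystem are gone.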
00:39:55.624 spdk_nvme_perf: further 'Read/Write completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submissions on the remaining queue pairs.
00:39:56.565 [2024-12-09 09:57:31.996749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218c190 is same with the state(6) to be set
00:39:56.845 spdk_nvme_perf: final batches of 'Read/Write completed with error (sct=0, sc=8)' completions drain as each queue pair shuts down.
00:39:56.845 [2024-12-09 09:57:32.037152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218e150 is same with the state(6) to be set
00:39:56.845 [2024-12-09 09:57:32.037347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218e510 is same with the state(6) to be set
00:39:56.846 [2024-12-09 09:57:32.038491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff4b800d020 is same with the state(6) to be set
00:39:56.846 [2024-12-09 09:57:32.038740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff4b800d7c0 is same with the state(6) to be set
00:39:56.846 Initializing NVMe Controllers 00:39:56.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:56.846 Controller IO queue size 128, less than required. 00:39:56.846 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:39:56.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:39:56.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:39:56.846 Initialization complete. Launching workers.
00:39:56.846 ======================================================== 00:39:56.846 Latency(us) 00:39:56.846 Device Information : IOPS MiB/s Average min max 00:39:56.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.76 0.09 912087.39 298.88 1008474.26 00:39:56.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 180.26 0.09 926871.11 403.48 1010753.07 00:39:56.846 ======================================================== 00:39:56.846 Total : 360.02 0.18 919489.48 298.88 1010753.07 00:39:56.846 00:39:56.846 [2024-12-09 09:57:32.039324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x218c190 (9): Bad file descriptor 00:39:56.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:39:56.846 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.846 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:39:56.846 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3089185 00:39:56.846 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3089185 00:39:57.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3089185) - No such process 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3089185 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3089185 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3089185 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.134 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:57.416 [2024-12-09 09:57:32.574109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3089940 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:57.416 09:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:57.416 [2024-12-09 09:57:32.649566] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
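The alternating kill -0/sleep records that follow are the watchdog loop in delete_subsystem.sh waiting for the short second perf run (pid 3089940) to exit. Reconstructed from the @56-@60 line references in the xtrace (a sketch, not the script verbatim):

    perf_pid=$!                              # @54 (3089940 in this run)
    delay=0                                  # @56
    while kill -0 "$perf_pid"; do            # @57: probe whether perf still exists
        (( delay++ > 20 )) && exit 1         # @60: give up after ~10 s of polling
        sleep 0.5                            # @58
    done
    # kill -0 sends no signal, it only checks the pid; when perf exits the
    # probe prints "kill: (3089940) - No such process" and the loop ends.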
00:39:57.677 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:57.677 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:57.677 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:58.250 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:58.250 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:58.250 09:57:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:58.823 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:58.823 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:58.823 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:59.396 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:59.396 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:59.396 09:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:39:59.969 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:39:59.969 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:39:59.969 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:00.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:00.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:40:00.230 09:57:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:40:00.490 Initializing NVMe Controllers 00:40:00.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:00.490 Controller IO queue size 128, less than required. 00:40:00.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:00.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:00.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:00.490 Initialization complete. Launching workers. 
00:40:00.490 ======================================================== 00:40:00.490 Latency(us) 00:40:00.490 Device Information : IOPS MiB/s Average min max 00:40:00.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002186.90 1000190.21 1005808.93 00:40:00.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003958.03 1000540.37 1010734.82 00:40:00.490 ======================================================== 00:40:00.490 Total : 256.00 0.12 1003072.46 1000190.21 1010734.82 00:40:00.490 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3089940 00:40:00.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3089940) - No such process 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3089940 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.750 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.751 rmmod nvme_tcp 00:40:00.751 rmmod nvme_fabrics 00:40:00.751 rmmod nvme_keyring 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3088909 ']' 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3088909 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3088909 ']' 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3088909 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:00.751 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3088909 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3088909' 00:40:01.011 killing process with pid 3088909 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3088909 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3088909 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.011 09:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:03.556 00:40:03.556 real 0m17.935s 00:40:03.556 user 0m26.538s 00:40:03.556 sys 0m7.115s 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:40:03.556 ************************************ 00:40:03.556 END TEST nvmf_delete_subsystem 00:40:03.556 ************************************ 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:03.556 ************************************ 00:40:03.556 START TEST nvmf_host_management 00:40:03.556 ************************************ 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:40:03.556 * Looking for test storage... 00:40:03.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:03.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.556 --rc genhtml_branch_coverage=1 00:40:03.556 --rc genhtml_function_coverage=1 00:40:03.556 --rc genhtml_legend=1 00:40:03.556 --rc geninfo_all_blocks=1 00:40:03.556 --rc geninfo_unexecuted_blocks=1 00:40:03.556 00:40:03.556 ' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:03.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.556 --rc genhtml_branch_coverage=1 00:40:03.556 --rc genhtml_function_coverage=1 00:40:03.556 --rc genhtml_legend=1 00:40:03.556 --rc geninfo_all_blocks=1 00:40:03.556 --rc geninfo_unexecuted_blocks=1 00:40:03.556 00:40:03.556 ' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:03.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.556 --rc genhtml_branch_coverage=1 00:40:03.556 --rc genhtml_function_coverage=1 00:40:03.556 --rc genhtml_legend=1 00:40:03.556 --rc geninfo_all_blocks=1 00:40:03.556 --rc geninfo_unexecuted_blocks=1 00:40:03.556 00:40:03.556 ' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:03.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:03.556 --rc genhtml_branch_coverage=1 00:40:03.556 --rc genhtml_function_coverage=1 00:40:03.556 --rc genhtml_legend=1 
00:40:03.556 --rc geninfo_all_blocks=1 00:40:03.556 --rc geninfo_unexecuted_blocks=1 00:40:03.556 00:40:03.556 ' 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:03.556 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(same three toolchain directories repeated from earlier sourcing):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:(same tail as @2) 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:(same tail as @2) 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:(the exported PATH from @4) 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 09:57:38
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:40:03.557 09:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:11.702 09:57:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:11.702 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:11.702 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
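The xtrace above is nvmf/common.sh discovering usable NICs: it builds per-family lists of PCI vendor:device IDs (E810, X722, ConnectX), keeps the two E810 ports on this rig (0x8086:0x159b at 0000:4b:00.0/1), and then maps each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that discovery, assuming a standard Linux /sys/bus/pci layout and using an abbreviated ID table rather than SPDK's full one:

#!/usr/bin/env bash
# Sketch: find candidate NVMe-oF NICs by PCI vendor/device ID, then map each
# PCI function to its kernel net interface via sysfs, as nvmf/common.sh does.
intel=0x8086 mellanox=0x15b3
ids=("$intel:0x1592" "$intel:0x159b" "$mellanox:0x1017")    # abbreviated table, not SPDK's full list
for dev in /sys/bus/pci/devices/*; do
    ven=$(<"$dev/vendor") did=$(<"$dev/device")
    for id in "${ids[@]}"; do
        [[ "$ven:$did" == "$id" ]] || continue
        echo "Found ${dev##*/} ($ven - $did)"
        for net in "$dev"/net/*; do                          # present only when a kernel driver is bound
            [[ -e "$net" ]] && echo "  net device: ${net##*/}"
        done
    done
done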
00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:11.702 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:11.702 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:11.702 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:11.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:11.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:40:11.703 00:40:11.703 --- 10.0.0.2 ping statistics --- 00:40:11.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.703 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:11.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:11.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:40:11.703 00:40:11.703 --- 10.0.0.1 ping statistics --- 00:40:11.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:11.703 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3094627 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3094627 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3094627 ']' 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:40:11.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.703 09:57:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 [2024-12-09 09:57:46.035909] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:11.703 [2024-12-09 09:57:46.036862] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:11.703 [2024-12-09 09:57:46.036898] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:11.703 [2024-12-09 09:57:46.130762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:11.703 [2024-12-09 09:57:46.148751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:11.703 [2024-12-09 09:57:46.148785] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:11.703 [2024-12-09 09:57:46.148793] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:11.703 [2024-12-09 09:57:46.148800] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:11.703 [2024-12-09 09:57:46.148806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:11.703 [2024-12-09 09:57:46.153653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:11.703 [2024-12-09 09:57:46.153787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:11.703 [2024-12-09 09:57:46.154042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:11.703 [2024-12-09 09:57:46.154042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:11.703 [2024-12-09 09:57:46.202745] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:11.703 [2024-12-09 09:57:46.203372] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:11.703 [2024-12-09 09:57:46.204320] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:11.703 [2024-12-09 09:57:46.204608] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:11.703 [2024-12-09 09:57:46.204773] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
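Up to this point the log has shown nvmf_tcp_init carving the first E810 port into a private network namespace and nvmfappstart launching nvmf_tgt inside it in interrupt mode, which is what produces the reactor and intr-mode notices just above. A condensed, hedged replay of those steps, with interface names, addresses, and the nvmf_tgt flags taken from the log itself (run as root; the binary path is shortened relative to the SPDK checkout, and the real script also tags the iptables rule with an SPDK_NVMF comment):

#!/usr/bin/env bash
set -e
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1               # and back
# Launch the target inside the namespace: interrupt mode, cores 1-4 (mask 0x1E).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &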
00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 [2024-12-09 09:57:46.882917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 Malloc0 00:40:11.703 [2024-12-09 09:57:46.975139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:11.703 09:57:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.703 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3094945 00:40:11.704 09:57:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3094945 /var/tmp/bdevperf.sock 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3094945 ']' 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:11.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:11.704 { 00:40:11.704 "params": { 00:40:11.704 "name": "Nvme$subsystem", 00:40:11.704 "trtype": "$TEST_TRANSPORT", 00:40:11.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:11.704 "adrfam": "ipv4", 00:40:11.704 "trsvcid": "$NVMF_PORT", 00:40:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:11.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:11.704 "hdgst": ${hdgst:-false}, 00:40:11.704 "ddgst": ${ddgst:-false} 00:40:11.704 }, 00:40:11.704 "method": "bdev_nvme_attach_controller" 00:40:11.704 } 00:40:11.704 EOF 00:40:11.704 )") 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
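gen_nvmf_target_json, traced above, builds the bdevperf attach configuration by expanding a shell heredoc per subsystem into a config array, comma-joining the fragments via IFS, and sanity-checking with jq; bdevperf then reads the result over /dev/fd/63 through process substitution. A minimal sketch of the visible pattern with the values resolved as in this run (the log elides the helper's outer JSON wrapping, so only the fragment assembly is sketched):

#!/usr/bin/env bash
# Sketch: assemble one bdev_nvme_attach_controller JSON fragment per subsystem.
config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}" | jq .    # comma-join the fragments and validate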
00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:11.704 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:11.704 "params": { 00:40:11.704 "name": "Nvme0", 00:40:11.704 "trtype": "tcp", 00:40:11.704 "traddr": "10.0.0.2", 00:40:11.704 "adrfam": "ipv4", 00:40:11.704 "trsvcid": "4420", 00:40:11.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:11.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:11.704 "hdgst": false, 00:40:11.704 "ddgst": false 00:40:11.704 }, 00:40:11.704 "method": "bdev_nvme_attach_controller" 00:40:11.704 }' 00:40:11.704 [2024-12-09 09:57:47.079939] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:11.704 [2024-12-09 09:57:47.079993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094945 ] 00:40:11.964 [2024-12-09 09:57:47.168413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:11.964 [2024-12-09 09:57:47.186560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:12.225 Running I/O for 10 seconds... 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.485 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.486 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.748 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.748 [2024-12-09 09:57:47.966769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 00:40:12.748 [2024-12-09 09:57:47.966884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 
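waitforio, traced above, polls the bdevperf RPC socket until the Nvme0n1 bdev reports at least 100 completed reads (707 on the first poll here), after which the test removes host0 from cnode0; the tcp.c recv-state chatter and aborted-command notices around this point are the expected fallout of that removal. A hedged sketch of the polling idiom using SPDK's scripts/rpc.py (rpc_cmd in the log is a thin wrapper over the same socket; the retry count and sleep here are illustrative, not the script's exact timing):

#!/usr/bin/env bash
# Sketch: wait until a bdev shows I/O flowing, as waitforio does.
SOCK=/var/tmp/bdevperf.sock BDEV=Nvme0n1 MIN_IO=100
for _ in {1..10}; do
  ops=$(scripts/rpc.py -s "$SOCK" bdev_get_iostat -b "$BDEV" |
        jq -r '.bdevs[0].num_read_ops')
  if [ "${ops:-0}" -ge "$MIN_IO" ]; then
    echo "I/O confirmed: $ops reads completed"
    exit 0
  fi
  sleep 0.25
done
echo "no I/O after 10 polls" >&2; exit 1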
00:40:12.748 [2024-12-09 09:57:47.966891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16ba7a0 is same with the state(6) to be set 
[the same tcp.c:1790 recv-state message, repeated for several dozen consecutive poll iterations (09:57:47.966891 through 09:57:47.967236), is elided here] 
[2024-12-09 09:57:47.967347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[2024-12-09 09:57:47.967390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[63 further READ command / ABORTED - SQ DELETION completion pairs (cid 1-63, lba 98432 through 106368) are elided: every read still queued on qid:1 was aborted when the submission queue was deleted during host removal] 
[2024-12-09 09:57:47.968474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163b1a0 is same with the state(6) to be set 
[2024-12-09 09:57:47.968552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
[2024-12-09 09:57:47.968563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[2024-12-09 09:57:47.968571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
[2024-12-09 09:57:47.968578] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.751 [2024-12-09 09:57:47.968587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.751 [2024-12-09 09:57:47.968594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.751 [2024-12-09 09:57:47.968602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:40:12.751 [2024-12-09 09:57:47.968609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.751 [2024-12-09 09:57:47.968616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x162a790 is same with the state(6) to be set 00:40:12.751 [2024-12-09 09:57:47.969846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:40:12.751 task offset: 98304 on job bdev=Nvme0n1 fails 00:40:12.751 00:40:12.751 Latency(us) 00:40:12.751 [2024-12-09T08:57:48.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.751 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:12.751 Job: Nvme0n1 ended in about 0.53 seconds with error 00:40:12.751 Verification LBA range: start 0x0 length 0x400 00:40:12.751 Nvme0n1 : 0.53 1461.53 91.35 121.79 0.00 39405.70 4314.45 35170.99 00:40:12.751 [2024-12-09T08:57:48.204Z] =================================================================================================================== 00:40:12.751 [2024-12-09T08:57:48.204Z] Total : 1461.53 91.35 121.79 0.00 39405.70 4314.45 35170.99 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.751 [2024-12-09 09:57:47.971830] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:12.751 [2024-12-09 09:57:47.971853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162a790 (9): Bad file descriptor 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:12.751 [2024-12-09 09:57:47.973060] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:40:12.751 [2024-12-09 09:57:47.973163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:40:12.751 [2024-12-09 09:57:47.973184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:12.751 [2024-12-09 09:57:47.973200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:40:12.751 [2024-12-09 09:57:47.973208] nvme_fabric.c: 
610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:40:12.751 [2024-12-09 09:57:47.973215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:40:12.751 [2024-12-09 09:57:47.973222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x162a790 00:40:12.751 [2024-12-09 09:57:47.973241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x162a790 (9): Bad file descriptor 00:40:12.751 [2024-12-09 09:57:47.973253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:40:12.751 [2024-12-09 09:57:47.973260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:40:12.751 [2024-12-09 09:57:47.973268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:40:12.751 [2024-12-09 09:57:47.973276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.751 09:57:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:40:13.693 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3094945 00:40:13.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3094945) - No such process 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:13.694 09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:13.694 { 00:40:13.694 "params": { 00:40:13.694 "name": "Nvme$subsystem", 00:40:13.694 "trtype": "$TEST_TRANSPORT", 00:40:13.694 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:13.694 "adrfam": "ipv4", 00:40:13.694 "trsvcid": "$NVMF_PORT", 00:40:13.694 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:13.694 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:13.694 "hdgst": ${hdgst:-false}, 00:40:13.694 "ddgst": ${ddgst:-false} 00:40:13.694 }, 00:40:13.694 "method": "bdev_nvme_attach_controller" 00:40:13.694 } 00:40:13.694 EOF 00:40:13.694 )") 00:40:13.694 
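Aside on the bdevperf invocation above: it reads its bdev configuration from /dev/fd/62, which here comes from a process substitution fed by gen_nvmf_target_json (the expanded JSON is printed just below). A minimal stand-alone sketch of the same invocation, assuming the test's common helpers are sourced so gen_nvmf_target_json is defined and the working directory is the SPDK repository root:

    # Sketch only: feed the generated attach-controller JSON to bdevperf
    # via process substitution instead of the harness's /dev/fd/62.
    build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1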
09:57:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:40:13.694 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:40:13.694 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:40:13.694 09:57:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:13.694 "params": { 00:40:13.694 "name": "Nvme0", 00:40:13.694 "trtype": "tcp", 00:40:13.694 "traddr": "10.0.0.2", 00:40:13.694 "adrfam": "ipv4", 00:40:13.694 "trsvcid": "4420", 00:40:13.694 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:13.694 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:40:13.694 "hdgst": false, 00:40:13.694 "ddgst": false 00:40:13.694 }, 00:40:13.694 "method": "bdev_nvme_attach_controller" 00:40:13.694 }' 00:40:13.694 [2024-12-09 09:57:49.043176] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:13.694 [2024-12-09 09:57:49.043233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095348 ] 00:40:13.694 [2024-12-09 09:57:49.132068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:13.955 [2024-12-09 09:57:49.149420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:13.955 Running I/O for 1 seconds... 00:40:14.898 1625.00 IOPS, 101.56 MiB/s 00:40:14.898 Latency(us) 00:40:14.898 [2024-12-09T08:57:50.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:14.898 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:40:14.898 Verification LBA range: start 0x0 length 0x400 00:40:14.898 Nvme0n1 : 1.01 1670.23 104.39 0.00 0.00 37547.46 2389.33 36700.16 00:40:14.898 [2024-12-09T08:57:50.351Z] =================================================================================================================== 00:40:14.898 [2024-12-09T08:57:50.351Z] Total : 1670.23 104.39 0.00 0.00 37547.46 2389.33 36700.16 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
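A quick sanity check of the two bdevperf result tables above: with 65536-byte I/Os, MiB/s = IOPS * 65536 / 2^20 = IOPS / 16, and at queue depth 64 Little's law predicts IOPS of roughly 64 / average latency:

    echo '1670.23 / 16' | bc -l               # 104.389..., matches the 104.39 MiB/s reported
    echo '1461.53 / 16' | bc -l               # 91.345...,  matches the earlier failed run's 91.35
    echo '64 / (37547.46 / 1000000)' | bc -l  # ~1704 IOPS from the 37547.46 us average, close to the 1670.23 measured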
00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:15.159 rmmod nvme_tcp 00:40:15.159 rmmod nvme_fabrics 00:40:15.159 rmmod nvme_keyring 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3094627 ']' 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3094627 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3094627 ']' 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3094627 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:15.159 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094627 00:40:15.160 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:15.160 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:15.160 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094627' 00:40:15.160 killing process with pid 3094627 00:40:15.160 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3094627 00:40:15.160 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3094627 00:40:15.421 [2024-12-09 09:57:50.661665] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:40:15.421 09:57:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:15.421 09:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.333 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:17.333 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:40:17.333 00:40:17.333 real 0m14.264s 00:40:17.333 user 0m18.625s 00:40:17.333 sys 0m7.097s 00:40:17.333 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:17.333 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:40:17.333 ************************************ 00:40:17.333 END TEST nvmf_host_management 00:40:17.333 ************************************ 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:17.593 ************************************ 00:40:17.593 START TEST nvmf_lvol 00:40:17.593 ************************************ 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:40:17.593 * Looking for test storage... 
00:40:17.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:40:17.593 09:57:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.855 --rc genhtml_branch_coverage=1 00:40:17.855 --rc genhtml_function_coverage=1 00:40:17.855 --rc genhtml_legend=1 00:40:17.855 --rc geninfo_all_blocks=1 00:40:17.855 --rc geninfo_unexecuted_blocks=1 00:40:17.855 00:40:17.855 ' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.855 --rc genhtml_branch_coverage=1 00:40:17.855 --rc genhtml_function_coverage=1 00:40:17.855 --rc genhtml_legend=1 00:40:17.855 --rc geninfo_all_blocks=1 00:40:17.855 --rc geninfo_unexecuted_blocks=1 00:40:17.855 00:40:17.855 ' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.855 --rc genhtml_branch_coverage=1 00:40:17.855 --rc genhtml_function_coverage=1 00:40:17.855 --rc genhtml_legend=1 00:40:17.855 --rc geninfo_all_blocks=1 00:40:17.855 --rc geninfo_unexecuted_blocks=1 00:40:17.855 00:40:17.855 ' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:17.855 --rc genhtml_branch_coverage=1 00:40:17.855 --rc genhtml_function_coverage=1 00:40:17.855 --rc genhtml_legend=1 00:40:17.855 --rc geninfo_all_blocks=1 00:40:17.855 --rc geninfo_unexecuted_blocks=1 00:40:17.855 00:40:17.855 ' 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:17.855 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:17.856 09:57:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:40:17.856 09:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:25.999 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:26.000 09:58:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:26.000 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:26.000 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:26.000 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:26.000 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:26.000 
09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:26.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:26.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:40:26.000 00:40:26.000 --- 10.0.0.2 ping statistics --- 00:40:26.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.000 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:26.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:26.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:40:26.000 00:40:26.000 --- 10.0.0.1 ping statistics --- 00:40:26.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:26.000 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3099683 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3099683 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3099683 ']' 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:26.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:26.000 09:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.000 [2024-12-09 09:58:00.568090] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
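Note on the core mask: nvmf_tgt is launched above with -m 0x7, where each set bit selects one core, so 0x7 = 0b111 schedules reactors on cores 0 through 2. That is consistent with the "Total cores available: 3" and the three "Reactor started on core" notices below.

    echo "obase=2; $((0x7))" | bc   # -> 111, i.e. three cores selected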
00:40:26.000 [2024-12-09 09:58:00.569203] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:26.001 [2024-12-09 09:58:00.569252] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:26.001 [2024-12-09 09:58:00.666323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:26.001 [2024-12-09 09:58:00.686521] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:26.001 [2024-12-09 09:58:00.686561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:26.001 [2024-12-09 09:58:00.686569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:26.001 [2024-12-09 09:58:00.686576] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:26.001 [2024-12-09 09:58:00.686583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:26.001 [2024-12-09 09:58:00.688266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:26.001 [2024-12-09 09:58:00.688384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:26.001 [2024-12-09 09:58:00.688386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:26.001 [2024-12-09 09:58:00.743017] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:26.001 [2024-12-09 09:58:00.743490] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:26.001 [2024-12-09 09:58:00.743917] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:26.001 [2024-12-09 09:58:00.744144] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:26.001 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:26.263 [2024-12-09 09:58:01.573252] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:26.263 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.524 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:40:26.524 09:58:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:26.785 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:40:26.785 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:40:26.785 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:40:27.046 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cf570442-4c2c-4a64-b089-5aa9a8cb0f24 00:40:27.046 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cf570442-4c2c-4a64-b089-5aa9a8cb0f24 lvol 20 00:40:27.307 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=50028acf-64a5-4e33-8a02-a660518db82d 00:40:27.307 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:27.307 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50028acf-64a5-4e33-8a02-a660518db82d 00:40:27.568 09:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:27.828 [2024-12-09 09:58:03.021091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:40:27.828 09:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:27.828 09:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:40:27.828 09:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3100180 00:40:27.828 09:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:40:29.210 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 50028acf-64a5-4e33-8a02-a660518db82d MY_SNAPSHOT 00:40:29.210 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7bf34f71-a424-4378-b5ea-e915abf74397 00:40:29.210 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 50028acf-64a5-4e33-8a02-a660518db82d 30 00:40:29.472 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 7bf34f71-a424-4378-b5ea-e915abf74397 MY_CLONE 00:40:29.472 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f1281421-17db-4be6-9590-ca5ffb4012d8 00:40:29.472 09:58:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f1281421-17db-4be6-9590-ca5ffb4012d8 00:40:30.043 09:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3100180 00:40:38.178 Initializing NVMe Controllers 00:40:38.178 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:40:38.178 Controller IO queue size 128, less than required. 00:40:38.178 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:38.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:40:38.178 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:40:38.178 Initialization complete. Launching workers. 
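Condensed, the nvmf_lvol steps traced above are the RPC sequence sketched below: two malloc bdevs striped into a RAID0, an lvstore and a 20 MiB lvol carved from it and exported over TCP, then snapshot/resize/clone/inflate exercised while spdk_nvme_perf drives random writes. The calls are taken from this log; the lvs/lvol/snap/clone variables are placeholders for the UUIDs the calls print at runtime.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC bdev_malloc_create 64 512                                  # -> Malloc0
$RPC bdev_malloc_create 64 512                                  # -> Malloc1
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # 64 KiB strips
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)                  # prints lvstore UUID
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)                 # 20 MiB lvol

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# With perf running against cnode0, exercise the lvol maintenance paths:
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$lvol" 30                  # grow the live lvol to 30 MiB
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"                   # detach the clone from its snapshot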
00:40:38.178 ======================================================== 00:40:38.178 Latency(us) 00:40:38.178 Device Information : IOPS MiB/s Average min max 00:40:38.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15589.70 60.90 8212.32 4232.78 60144.82 00:40:38.178 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15158.90 59.21 8443.93 3719.87 50562.54 00:40:38.178 ======================================================== 00:40:38.178 Total : 30748.60 120.11 8326.51 3719.87 60144.82 00:40:38.178 00:40:38.178 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:40:38.440 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50028acf-64a5-4e33-8a02-a660518db82d 00:40:38.702 09:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cf570442-4c2c-4a64-b089-5aa9a8cb0f24 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:38.702 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:38.702 rmmod nvme_tcp 00:40:38.702 rmmod nvme_fabrics 00:40:38.963 rmmod nvme_keyring 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3099683 ']' 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3099683 ']' 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3099683' 00:40:38.963 killing process with pid 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3099683 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:38.963 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:40:38.964 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:38.964 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:38.964 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:38.964 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:38.964 09:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:41.510 00:40:41.510 real 0m23.599s 00:40:41.510 user 0m55.679s 00:40:41.510 sys 0m10.563s 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:40:41.510 ************************************ 00:40:41.510 END TEST nvmf_lvol 00:40:41.510 ************************************ 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:41.510 ************************************ 00:40:41.510 START TEST nvmf_lvs_grow 00:40:41.510 
************************************ 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:40:41.510 * Looking for test storage... 00:40:41.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.510 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.511 --rc genhtml_branch_coverage=1 00:40:41.511 --rc genhtml_function_coverage=1 00:40:41.511 --rc genhtml_legend=1 00:40:41.511 --rc geninfo_all_blocks=1 00:40:41.511 --rc geninfo_unexecuted_blocks=1 00:40:41.511 00:40:41.511 ' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.511 --rc genhtml_branch_coverage=1 00:40:41.511 --rc genhtml_function_coverage=1 00:40:41.511 --rc genhtml_legend=1 00:40:41.511 --rc geninfo_all_blocks=1 00:40:41.511 --rc geninfo_unexecuted_blocks=1 00:40:41.511 00:40:41.511 ' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.511 --rc genhtml_branch_coverage=1 00:40:41.511 --rc genhtml_function_coverage=1 00:40:41.511 --rc genhtml_legend=1 00:40:41.511 --rc geninfo_all_blocks=1 00:40:41.511 --rc geninfo_unexecuted_blocks=1 00:40:41.511 00:40:41.511 ' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:41.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.511 --rc genhtml_branch_coverage=1 00:40:41.511 --rc genhtml_function_coverage=1 00:40:41.511 --rc genhtml_legend=1 00:40:41.511 --rc geninfo_all_blocks=1 00:40:41.511 --rc geninfo_unexecuted_blocks=1 00:40:41.511 00:40:41.511 ' 00:40:41.511 09:58:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:41.511 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:41.512 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:41.512 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:41.512 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:41.512 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:40:41.512 09:58:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.100 09:58:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:48.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:48.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.100 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:48.101 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.101 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:48.364 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.364 09:58:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:40:48.364 00:40:48.364 --- 10.0.0.2 ping statistics --- 00:40:48.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.364 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:40:48.364 00:40:48.364 --- 10.0.0.1 ping statistics --- 00:40:48.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.364 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:48.364 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3106404 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3106404 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3106404 ']' 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.626 09:58:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:48.626 [2024-12-09 09:58:23.920247] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
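The ping exchanges above verify the loopback topology that nvmftestinit builds from one two-port NIC, roughly as sketched below. Interface and namespace names are taken from this run; the suite additionally tags its iptables rule with an SPDK_NVMF comment so teardown can locate it.

TGT=cvl_0_0            # port moved into the target namespace
INI=cvl_0_1            # port left in the root namespace (initiator side)
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
ip link set "$INI" up
ip netns exec "$NS" ip link set "$TGT" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                        # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator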
00:40:48.626 [2024-12-09 09:58:23.921297] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:48.627 [2024-12-09 09:58:23.921341] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.627 [2024-12-09 09:58:24.018352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:48.627 [2024-12-09 09:58:24.041605] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.627 [2024-12-09 09:58:24.041654] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.627 [2024-12-09 09:58:24.041667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.627 [2024-12-09 09:58:24.041674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.627 [2024-12-09 09:58:24.041680] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.627 [2024-12-09 09:58:24.042296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.888 [2024-12-09 09:58:24.101122] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:48.888 [2024-12-09 09:58:24.101376] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:49.464 [2024-12-09 09:58:24.883119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:49.464 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:40:49.735 ************************************ 00:40:49.735 START TEST lvs_grow_clean 00:40:49.735 ************************************ 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:49.735 09:58:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:40:49.735 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:40:49.735 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:40:49.997 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:40:49.997 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:40:49.997 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bf3a3633-c16d-434f-9a73-f0ebeecf086c lvol 150 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=9c83a1d4-0bdf-476b-b24d-a581f8f65518 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:40:50.256 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:40:50.516 [2024-12-09 09:58:25.830793] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:40:50.516 [2024-12-09 09:58:25.830938] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:40:50.516 true 00:40:50.516 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:40:50.516 09:58:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:40:50.776 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:40:50.776 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:40:50.776 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9c83a1d4-0bdf-476b-b24d-a581f8f65518 00:40:51.036 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:40:51.036 [2024-12-09 09:58:26.467334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:51.036 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3106815 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3106815 /var/tmp/bdevperf.sock 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3106815 ']' 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:40:51.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.298 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:40:51.298 [2024-12-09 09:58:26.678015] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:40:51.298 [2024-12-09 09:58:26.678066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106815 ] 00:40:51.558 [2024-12-09 09:58:26.760588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:51.558 [2024-12-09 09:58:26.785704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.558 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:51.558 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:40:51.558 09:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:40:51.819 Nvme0n1 00:40:51.819 09:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:40:52.081 [ 00:40:52.081 { 00:40:52.081 "name": "Nvme0n1", 00:40:52.081 "aliases": [ 00:40:52.081 "9c83a1d4-0bdf-476b-b24d-a581f8f65518" 00:40:52.081 ], 00:40:52.081 "product_name": "NVMe disk", 00:40:52.081 "block_size": 4096, 00:40:52.081 "num_blocks": 38912, 00:40:52.081 "uuid": "9c83a1d4-0bdf-476b-b24d-a581f8f65518", 00:40:52.081 "numa_id": 0, 00:40:52.081 "assigned_rate_limits": { 00:40:52.081 "rw_ios_per_sec": 0, 00:40:52.081 "rw_mbytes_per_sec": 0, 00:40:52.081 "r_mbytes_per_sec": 0, 00:40:52.081 "w_mbytes_per_sec": 0 00:40:52.081 }, 00:40:52.081 "claimed": false, 00:40:52.081 "zoned": false, 00:40:52.081 "supported_io_types": { 00:40:52.081 "read": true, 00:40:52.081 "write": true, 00:40:52.081 "unmap": true, 00:40:52.081 "flush": true, 00:40:52.081 "reset": true, 00:40:52.081 "nvme_admin": true, 00:40:52.081 "nvme_io": true, 00:40:52.081 "nvme_io_md": false, 00:40:52.081 "write_zeroes": true, 00:40:52.081 "zcopy": false, 00:40:52.081 "get_zone_info": false, 00:40:52.081 "zone_management": false, 00:40:52.081 "zone_append": false, 00:40:52.081 "compare": true, 00:40:52.081 "compare_and_write": true, 00:40:52.081 "abort": true, 00:40:52.081 "seek_hole": false, 00:40:52.081 "seek_data": false, 00:40:52.081 "copy": true, 
00:40:52.081 "nvme_iov_md": false 00:40:52.081 }, 00:40:52.081 "memory_domains": [ 00:40:52.081 { 00:40:52.081 "dma_device_id": "system", 00:40:52.081 "dma_device_type": 1 00:40:52.081 } 00:40:52.081 ], 00:40:52.081 "driver_specific": { 00:40:52.081 "nvme": [ 00:40:52.081 { 00:40:52.081 "trid": { 00:40:52.081 "trtype": "TCP", 00:40:52.081 "adrfam": "IPv4", 00:40:52.081 "traddr": "10.0.0.2", 00:40:52.081 "trsvcid": "4420", 00:40:52.081 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:40:52.081 }, 00:40:52.081 "ctrlr_data": { 00:40:52.081 "cntlid": 1, 00:40:52.081 "vendor_id": "0x8086", 00:40:52.081 "model_number": "SPDK bdev Controller", 00:40:52.081 "serial_number": "SPDK0", 00:40:52.081 "firmware_revision": "25.01", 00:40:52.081 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:40:52.081 "oacs": { 00:40:52.081 "security": 0, 00:40:52.081 "format": 0, 00:40:52.081 "firmware": 0, 00:40:52.081 "ns_manage": 0 00:40:52.081 }, 00:40:52.081 "multi_ctrlr": true, 00:40:52.081 "ana_reporting": false 00:40:52.081 }, 00:40:52.081 "vs": { 00:40:52.081 "nvme_version": "1.3" 00:40:52.081 }, 00:40:52.081 "ns_data": { 00:40:52.081 "id": 1, 00:40:52.081 "can_share": true 00:40:52.081 } 00:40:52.081 } 00:40:52.081 ], 00:40:52.081 "mp_policy": "active_passive" 00:40:52.081 } 00:40:52.081 } 00:40:52.081 ] 00:40:52.081 09:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3107106 00:40:52.081 09:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:40:52.081 09:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:40:52.081 Running I/O for 10 seconds... 
00:40:53.023 Latency(us) 00:40:53.023 [2024-12-09T08:58:28.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:53.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:53.023 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:40:53.023 [2024-12-09T08:58:28.476Z] =================================================================================================================== 00:40:53.023 [2024-12-09T08:58:28.476Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:40:53.023 00:40:53.971 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:40:54.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:54.233 Nvme0n1 : 2.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:40:54.233 [2024-12-09T08:58:29.686Z] =================================================================================================================== 00:40:54.233 [2024-12-09T08:58:29.686Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:40:54.233 00:40:54.233 true 00:40:54.233 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:40:54.233 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:40:54.496 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:40:54.496 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:40:54.496 09:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3107106 00:40:55.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:55.075 Nvme0n1 : 3.00 17695.33 69.12 0.00 0.00 0.00 0.00 0.00 00:40:55.075 [2024-12-09T08:58:30.528Z] =================================================================================================================== 00:40:55.075 [2024-12-09T08:58:30.528Z] Total : 17695.33 69.12 0.00 0.00 0.00 0.00 0.00 00:40:55.075 00:40:56.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:56.096 Nvme0n1 : 4.00 17811.75 69.58 0.00 0.00 0.00 0.00 0.00 00:40:56.096 [2024-12-09T08:58:31.549Z] =================================================================================================================== 00:40:56.096 [2024-12-09T08:58:31.549Z] Total : 17811.75 69.58 0.00 0.00 0.00 0.00 0.00 00:40:56.096 00:40:57.038 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:57.038 Nvme0n1 : 5.00 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:40:57.038 [2024-12-09T08:58:32.491Z] =================================================================================================================== 00:40:57.038 [2024-12-09T08:58:32.491Z] Total : 18034.00 70.45 0.00 0.00 0.00 0.00 0.00 00:40:57.038 00:40:58.436 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:58.436 Nvme0n1 : 6.00 19304.00 75.41 0.00 0.00 0.00 0.00 0.00 00:40:58.436 [2024-12-09T08:58:33.889Z] 
=================================================================================================================== 00:40:58.436 [2024-12-09T08:58:33.889Z] Total : 19304.00 75.41 0.00 0.00 0.00 0.00 0.00 00:40:58.436 00:40:59.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:40:59.376 Nvme0n1 : 7.00 20211.14 78.95 0.00 0.00 0.00 0.00 0.00 00:40:59.376 [2024-12-09T08:58:34.829Z] =================================================================================================================== 00:40:59.376 [2024-12-09T08:58:34.829Z] Total : 20211.14 78.95 0.00 0.00 0.00 0.00 0.00 00:40:59.376 00:41:00.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:00.317 Nvme0n1 : 8.00 20891.50 81.61 0.00 0.00 0.00 0.00 0.00 00:41:00.317 [2024-12-09T08:58:35.770Z] =================================================================================================================== 00:41:00.317 [2024-12-09T08:58:35.770Z] Total : 20891.50 81.61 0.00 0.00 0.00 0.00 0.00 00:41:00.317 00:41:01.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:01.258 Nvme0n1 : 9.00 21420.67 83.67 0.00 0.00 0.00 0.00 0.00 00:41:01.258 [2024-12-09T08:58:36.711Z] =================================================================================================================== 00:41:01.258 [2024-12-09T08:58:36.711Z] Total : 21420.67 83.67 0.00 0.00 0.00 0.00 0.00 00:41:01.258 00:41:02.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.200 Nvme0n1 : 10.00 21856.70 85.38 0.00 0.00 0.00 0.00 0.00 00:41:02.200 [2024-12-09T08:58:37.653Z] =================================================================================================================== 00:41:02.200 [2024-12-09T08:58:37.653Z] Total : 21856.70 85.38 0.00 0.00 0.00 0.00 0.00 00:41:02.200 00:41:02.200 00:41:02.200 Latency(us) 00:41:02.200 [2024-12-09T08:58:37.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:02.200 Nvme0n1 : 10.00 21860.17 85.39 0.00 0.00 5852.76 4505.60 31675.73 00:41:02.200 [2024-12-09T08:58:37.653Z] =================================================================================================================== 00:41:02.200 [2024-12-09T08:58:37.653Z] Total : 21860.17 85.39 0.00 0.00 5852.76 4505.60 31675.73 00:41:02.200 { 00:41:02.200 "results": [ 00:41:02.200 { 00:41:02.200 "job": "Nvme0n1", 00:41:02.200 "core_mask": "0x2", 00:41:02.200 "workload": "randwrite", 00:41:02.200 "status": "finished", 00:41:02.200 "queue_depth": 128, 00:41:02.200 "io_size": 4096, 00:41:02.200 "runtime": 10.004269, 00:41:02.200 "iops": 21860.167894325914, 00:41:02.200 "mibps": 85.3912808372106, 00:41:02.200 "io_failed": 0, 00:41:02.200 "io_timeout": 0, 00:41:02.200 "avg_latency_us": 5852.757003619958, 00:41:02.200 "min_latency_us": 4505.6, 00:41:02.200 "max_latency_us": 31675.733333333334 00:41:02.200 } 00:41:02.200 ], 00:41:02.200 "core_count": 1 00:41:02.200 } 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3106815 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3106815 ']' 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3106815 00:41:02.200 
09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3106815 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3106815' 00:41:02.200 killing process with pid 3106815 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3106815 00:41:02.200 Received shutdown signal, test time was about 10.000000 seconds 00:41:02.200 00:41:02.200 Latency(us) 00:41:02.200 [2024-12-09T08:58:37.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:02.200 [2024-12-09T08:58:37.653Z] =================================================================================================================== 00:41:02.200 [2024-12-09T08:58:37.653Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:02.200 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3106815 00:41:02.460 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:02.460 09:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:02.721 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:02.721 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:02.982 [2024-12-09 09:58:38.350867] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:02.982 
09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:02.982 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:03.242 request: 00:41:03.242 { 00:41:03.242 "uuid": "bf3a3633-c16d-434f-9a73-f0ebeecf086c", 00:41:03.242 "method": "bdev_lvol_get_lvstores", 00:41:03.242 "req_id": 1 00:41:03.242 } 00:41:03.242 Got JSON-RPC error response 00:41:03.242 response: 00:41:03.242 { 00:41:03.242 "code": -19, 00:41:03.242 "message": "No such device" 00:41:03.242 } 00:41:03.242 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:41:03.242 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:03.242 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:03.242 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:03.242 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:03.501 aio_bdev 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
9c83a1d4-0bdf-476b-b24d-a581f8f65518 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=9c83a1d4-0bdf-476b-b24d-a581f8f65518 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:03.502 09:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9c83a1d4-0bdf-476b-b24d-a581f8f65518 -t 2000 00:41:03.761 [ 00:41:03.761 { 00:41:03.761 "name": "9c83a1d4-0bdf-476b-b24d-a581f8f65518", 00:41:03.761 "aliases": [ 00:41:03.761 "lvs/lvol" 00:41:03.761 ], 00:41:03.761 "product_name": "Logical Volume", 00:41:03.761 "block_size": 4096, 00:41:03.761 "num_blocks": 38912, 00:41:03.761 "uuid": "9c83a1d4-0bdf-476b-b24d-a581f8f65518", 00:41:03.761 "assigned_rate_limits": { 00:41:03.761 "rw_ios_per_sec": 0, 00:41:03.761 "rw_mbytes_per_sec": 0, 00:41:03.761 "r_mbytes_per_sec": 0, 00:41:03.761 "w_mbytes_per_sec": 0 00:41:03.761 }, 00:41:03.761 "claimed": false, 00:41:03.761 "zoned": false, 00:41:03.761 "supported_io_types": { 00:41:03.761 "read": true, 00:41:03.761 "write": true, 00:41:03.761 "unmap": true, 00:41:03.762 "flush": false, 00:41:03.762 "reset": true, 00:41:03.762 "nvme_admin": false, 00:41:03.762 "nvme_io": false, 00:41:03.762 "nvme_io_md": false, 00:41:03.762 "write_zeroes": true, 00:41:03.762 "zcopy": false, 00:41:03.762 "get_zone_info": false, 00:41:03.762 "zone_management": false, 00:41:03.762 "zone_append": false, 00:41:03.762 "compare": false, 00:41:03.762 "compare_and_write": false, 00:41:03.762 "abort": false, 00:41:03.762 "seek_hole": true, 00:41:03.762 "seek_data": true, 00:41:03.762 "copy": false, 00:41:03.762 "nvme_iov_md": false 00:41:03.762 }, 00:41:03.762 "driver_specific": { 00:41:03.762 "lvol": { 00:41:03.762 "lvol_store_uuid": "bf3a3633-c16d-434f-9a73-f0ebeecf086c", 00:41:03.762 "base_bdev": "aio_bdev", 00:41:03.762 "thin_provision": false, 00:41:03.762 "num_allocated_clusters": 38, 00:41:03.762 "snapshot": false, 00:41:03.762 "clone": false, 00:41:03.762 "esnap_clone": false 00:41:03.762 } 00:41:03.762 } 00:41:03.762 } 00:41:03.762 ] 00:41:03.762 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:41:03.762 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:03.762 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:04.022 09:58:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:04.022 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:04.022 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:04.022 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:04.022 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9c83a1d4-0bdf-476b-b24d-a581f8f65518 00:41:04.283 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bf3a3633-c16d-434f-9a73-f0ebeecf086c 00:41:04.544 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:04.805 09:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:04.805 00:41:04.805 real 0m15.106s 00:41:04.805 user 0m14.705s 00:41:04.805 sys 0m1.322s 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:41:04.805 ************************************ 00:41:04.805 END TEST lvs_grow_clean 00:41:04.805 ************************************ 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:04.805 ************************************ 00:41:04.805 START TEST lvs_grow_dirty 00:41:04.805 ************************************ 00:41:04.805 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:04.806 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:05.066 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:41:05.066 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:41:05.066 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:05.066 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:05.066 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:41:05.327 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:41:05.327 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:41:05.327 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 lvol 150 00:41:05.587 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:05.587 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:05.587 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:41:05.587 [2024-12-09 09:58:40.982791] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:41:05.587 [2024-12-09 09:58:40.982951] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:41:05.587 true 00:41:05.587 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:05.587 09:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:41:05.847 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:41:05.847 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:41:06.106 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:06.106 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:06.367 [2024-12-09 09:58:41.627318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3109822 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3109822 /var/tmp/bdevperf.sock 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3109822 ']' 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:06.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:06.367 09:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:06.627 [2024-12-09 09:58:41.861303] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:06.627 [2024-12-09 09:58:41.861374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3109822 ] 00:41:06.627 [2024-12-09 09:58:41.953571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:06.627 [2024-12-09 09:58:41.970517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.197 09:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.197 09:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:07.197 09:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:41:07.456 Nvme0n1 00:41:07.457 09:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:41:07.717 [ 00:41:07.717 { 00:41:07.717 "name": "Nvme0n1", 00:41:07.717 "aliases": [ 00:41:07.717 "1319cf03-d2f7-42e2-b1b4-a68077b51cb9" 00:41:07.717 ], 00:41:07.717 "product_name": "NVMe disk", 00:41:07.717 "block_size": 4096, 00:41:07.717 "num_blocks": 38912, 00:41:07.717 "uuid": "1319cf03-d2f7-42e2-b1b4-a68077b51cb9", 00:41:07.717 "numa_id": 0, 00:41:07.717 "assigned_rate_limits": { 00:41:07.717 "rw_ios_per_sec": 0, 00:41:07.717 "rw_mbytes_per_sec": 0, 00:41:07.717 "r_mbytes_per_sec": 0, 00:41:07.717 "w_mbytes_per_sec": 0 00:41:07.717 }, 00:41:07.717 "claimed": false, 00:41:07.717 "zoned": false, 00:41:07.717 "supported_io_types": { 00:41:07.717 "read": true, 00:41:07.717 "write": true, 00:41:07.717 "unmap": true, 00:41:07.717 "flush": true, 00:41:07.717 "reset": true, 00:41:07.717 "nvme_admin": true, 00:41:07.717 "nvme_io": true, 00:41:07.717 "nvme_io_md": false, 00:41:07.717 "write_zeroes": true, 00:41:07.717 "zcopy": false, 00:41:07.717 "get_zone_info": false, 00:41:07.717 "zone_management": false, 00:41:07.717 "zone_append": false, 00:41:07.717 "compare": true, 00:41:07.717 "compare_and_write": true, 00:41:07.717 "abort": true, 00:41:07.717 "seek_hole": false, 00:41:07.717 "seek_data": false, 00:41:07.717 "copy": true, 00:41:07.717 "nvme_iov_md": false 00:41:07.717 }, 00:41:07.717 "memory_domains": [ 00:41:07.717 { 00:41:07.717 "dma_device_id": "system", 00:41:07.717 "dma_device_type": 1 00:41:07.717 } 00:41:07.717 ], 00:41:07.717 "driver_specific": { 00:41:07.717 "nvme": [ 00:41:07.717 { 00:41:07.717 "trid": { 00:41:07.717 "trtype": "TCP", 00:41:07.717 "adrfam": "IPv4", 00:41:07.717 "traddr": "10.0.0.2", 00:41:07.717 "trsvcid": "4420", 00:41:07.717 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:41:07.717 }, 00:41:07.717 "ctrlr_data": 
{ 00:41:07.717 "cntlid": 1, 00:41:07.717 "vendor_id": "0x8086", 00:41:07.717 "model_number": "SPDK bdev Controller", 00:41:07.717 "serial_number": "SPDK0", 00:41:07.717 "firmware_revision": "25.01", 00:41:07.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:07.717 "oacs": { 00:41:07.717 "security": 0, 00:41:07.717 "format": 0, 00:41:07.717 "firmware": 0, 00:41:07.717 "ns_manage": 0 00:41:07.717 }, 00:41:07.717 "multi_ctrlr": true, 00:41:07.717 "ana_reporting": false 00:41:07.717 }, 00:41:07.717 "vs": { 00:41:07.717 "nvme_version": "1.3" 00:41:07.717 }, 00:41:07.717 "ns_data": { 00:41:07.717 "id": 1, 00:41:07.717 "can_share": true 00:41:07.717 } 00:41:07.717 } 00:41:07.717 ], 00:41:07.717 "mp_policy": "active_passive" 00:41:07.717 } 00:41:07.717 } 00:41:07.717 ] 00:41:07.717 09:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3109915 00:41:07.717 09:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:41:07.717 09:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:41:07.717 Running I/O for 10 seconds... 00:41:09.101 Latency(us) 00:41:09.101 [2024-12-09T08:58:44.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:09.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.101 Nvme0n1 : 1.00 17553.00 68.57 0.00 0.00 0.00 0.00 0.00 00:41:09.101 [2024-12-09T08:58:44.554Z] =================================================================================================================== 00:41:09.101 [2024-12-09T08:58:44.554Z] Total : 17553.00 68.57 0.00 0.00 0.00 0.00 0.00 00:41:09.101 00:41:09.673 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:09.934 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:09.934 Nvme0n1 : 2.00 17857.00 69.75 0.00 0.00 0.00 0.00 0.00 00:41:09.934 [2024-12-09T08:58:45.387Z] =================================================================================================================== 00:41:09.934 [2024-12-09T08:58:45.387Z] Total : 17857.00 69.75 0.00 0.00 0.00 0.00 0.00 00:41:09.934 00:41:09.934 true 00:41:09.934 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:09.934 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:41:10.199 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:41:10.199 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:41:10.199 09:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3109915 00:41:10.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:10.769 Nvme0n1 : 
3.00 17958.33 70.15 0.00 0.00 0.00 0.00 0.00 00:41:10.769 [2024-12-09T08:58:46.222Z] =================================================================================================================== 00:41:10.769 [2024-12-09T08:58:46.222Z] Total : 17958.33 70.15 0.00 0.00 0.00 0.00 0.00 00:41:10.769 00:41:11.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:11.711 Nvme0n1 : 4.00 18009.00 70.35 0.00 0.00 0.00 0.00 0.00 00:41:11.712 [2024-12-09T08:58:47.165Z] =================================================================================================================== 00:41:11.712 [2024-12-09T08:58:47.165Z] Total : 18009.00 70.35 0.00 0.00 0.00 0.00 0.00 00:41:11.712 00:41:13.095 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:13.095 Nvme0n1 : 5.00 18928.40 73.94 0.00 0.00 0.00 0.00 0.00 00:41:13.095 [2024-12-09T08:58:48.548Z] =================================================================================================================== 00:41:13.095 [2024-12-09T08:58:48.548Z] Total : 18928.40 73.94 0.00 0.00 0.00 0.00 0.00 00:41:13.095 00:41:14.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.037 Nvme0n1 : 6.00 20049.33 78.32 0.00 0.00 0.00 0.00 0.00 00:41:14.037 [2024-12-09T08:58:49.490Z] =================================================================================================================== 00:41:14.037 [2024-12-09T08:58:49.490Z] Total : 20049.33 78.32 0.00 0.00 0.00 0.00 0.00 00:41:14.037 00:41:14.979 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:14.979 Nvme0n1 : 7.00 20836.71 81.39 0.00 0.00 0.00 0.00 0.00 00:41:14.979 [2024-12-09T08:58:50.432Z] =================================================================================================================== 00:41:14.979 [2024-12-09T08:58:50.432Z] Total : 20836.71 81.39 0.00 0.00 0.00 0.00 0.00 00:41:14.979 00:41:15.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:15.920 Nvme0n1 : 8.00 21438.88 83.75 0.00 0.00 0.00 0.00 0.00 00:41:15.920 [2024-12-09T08:58:51.373Z] =================================================================================================================== 00:41:15.920 [2024-12-09T08:58:51.373Z] Total : 21438.88 83.75 0.00 0.00 0.00 0.00 0.00 00:41:15.920 00:41:16.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:16.861 Nvme0n1 : 9.00 21907.22 85.58 0.00 0.00 0.00 0.00 0.00 00:41:16.861 [2024-12-09T08:58:52.314Z] =================================================================================================================== 00:41:16.861 [2024-12-09T08:58:52.314Z] Total : 21907.22 85.58 0.00 0.00 0.00 0.00 0.00 00:41:16.861 00:41:17.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.802 Nvme0n1 : 10.00 22281.90 87.04 0.00 0.00 0.00 0.00 0.00 00:41:17.802 [2024-12-09T08:58:53.255Z] =================================================================================================================== 00:41:17.802 [2024-12-09T08:58:53.255Z] Total : 22281.90 87.04 0.00 0.00 0.00 0.00 0.00 00:41:17.802 00:41:17.802 00:41:17.802 Latency(us) 00:41:17.802 [2024-12-09T08:58:53.255Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:17.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:41:17.802 Nvme0n1 : 10.00 22285.99 87.05 0.00 0.00 5741.33 2416.64 29054.29 00:41:17.802 
[2024-12-09T08:58:53.255Z] =================================================================================================================== 00:41:17.802 [2024-12-09T08:58:53.255Z] Total : 22285.99 87.05 0.00 0.00 5741.33 2416.64 29054.29 00:41:17.802 { 00:41:17.802 "results": [ 00:41:17.802 { 00:41:17.802 "job": "Nvme0n1", 00:41:17.802 "core_mask": "0x2", 00:41:17.802 "workload": "randwrite", 00:41:17.802 "status": "finished", 00:41:17.802 "queue_depth": 128, 00:41:17.802 "io_size": 4096, 00:41:17.802 "runtime": 10.003908, 00:41:17.802 "iops": 22285.990634859896, 00:41:17.802 "mibps": 87.05465091742147, 00:41:17.802 "io_failed": 0, 00:41:17.802 "io_timeout": 0, 00:41:17.802 "avg_latency_us": 5741.330468915631, 00:41:17.802 "min_latency_us": 2416.64, 00:41:17.802 "max_latency_us": 29054.293333333335 00:41:17.802 } 00:41:17.802 ], 00:41:17.802 "core_count": 1 00:41:17.802 } 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3109822 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3109822 ']' 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3109822 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:17.802 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3109822 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3109822' 00:41:18.063 killing process with pid 3109822 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3109822 00:41:18.063 Received shutdown signal, test time was about 10.000000 seconds 00:41:18.063 00:41:18.063 Latency(us) 00:41:18.063 [2024-12-09T08:58:53.516Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:41:18.063 [2024-12-09T08:58:53.516Z] =================================================================================================================== 00:41:18.063 [2024-12-09T08:58:53.516Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3109822 00:41:18.063 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:41:18.323 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:41:18.323 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:18.323 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3106404 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3106404 00:41:18.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3106404 Killed "${NVMF_APP[@]}" "$@" 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3111947 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3111947 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3111947 ']' 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:18.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:18.583 09:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:41:18.583 [2024-12-09 09:58:53.985044] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:18.583 [2024-12-09 09:58:53.986411] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:18.583 [2024-12-09 09:58:53.986464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:18.843 [2024-12-09 09:58:54.077500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:18.843 [2024-12-09 09:58:54.092555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:18.843 [2024-12-09 09:58:54.092584] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:18.843 [2024-12-09 09:58:54.092590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:18.843 [2024-12-09 09:58:54.092594] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:18.843 [2024-12-09 09:58:54.092599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:18.843 [2024-12-09 09:58:54.093055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:18.843 [2024-12-09 09:58:54.138352] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:18.843 [2024-12-09 09:58:54.138536] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:19.428 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:19.689 [2024-12-09 09:58:54.955171] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:41:19.689 [2024-12-09 09:58:54.955408] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:41:19.689 [2024-12-09 09:58:54.955496] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:19.689 09:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:19.950 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 -t 2000 00:41:19.950 [ 00:41:19.950 { 00:41:19.950 "name": "1319cf03-d2f7-42e2-b1b4-a68077b51cb9", 00:41:19.950 "aliases": [ 00:41:19.950 "lvs/lvol" 00:41:19.950 ], 00:41:19.950 "product_name": "Logical Volume", 00:41:19.950 "block_size": 4096, 00:41:19.950 "num_blocks": 38912, 00:41:19.950 "uuid": "1319cf03-d2f7-42e2-b1b4-a68077b51cb9", 00:41:19.950 "assigned_rate_limits": { 00:41:19.950 "rw_ios_per_sec": 0, 00:41:19.950 "rw_mbytes_per_sec": 0, 00:41:19.950 
"r_mbytes_per_sec": 0, 00:41:19.950 "w_mbytes_per_sec": 0 00:41:19.950 }, 00:41:19.950 "claimed": false, 00:41:19.950 "zoned": false, 00:41:19.950 "supported_io_types": { 00:41:19.950 "read": true, 00:41:19.950 "write": true, 00:41:19.950 "unmap": true, 00:41:19.950 "flush": false, 00:41:19.950 "reset": true, 00:41:19.950 "nvme_admin": false, 00:41:19.950 "nvme_io": false, 00:41:19.950 "nvme_io_md": false, 00:41:19.950 "write_zeroes": true, 00:41:19.950 "zcopy": false, 00:41:19.950 "get_zone_info": false, 00:41:19.950 "zone_management": false, 00:41:19.950 "zone_append": false, 00:41:19.950 "compare": false, 00:41:19.950 "compare_and_write": false, 00:41:19.950 "abort": false, 00:41:19.950 "seek_hole": true, 00:41:19.950 "seek_data": true, 00:41:19.950 "copy": false, 00:41:19.950 "nvme_iov_md": false 00:41:19.950 }, 00:41:19.950 "driver_specific": { 00:41:19.950 "lvol": { 00:41:19.950 "lvol_store_uuid": "59ae59ce-c5b7-4b65-94d2-6250257bf5c0", 00:41:19.950 "base_bdev": "aio_bdev", 00:41:19.950 "thin_provision": false, 00:41:19.950 "num_allocated_clusters": 38, 00:41:19.950 "snapshot": false, 00:41:19.950 "clone": false, 00:41:19.950 "esnap_clone": false 00:41:19.950 } 00:41:19.950 } 00:41:19.950 } 00:41:19.950 ] 00:41:19.950 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:19.950 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:19.950 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:41:20.211 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:41:20.211 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:20.211 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:41:20.211 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:41:20.211 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:20.472 [2024-12-09 09:58:55.805524] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:20.472 09:58:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:41:20.472 09:58:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:20.733 request: 00:41:20.733 { 00:41:20.733 "uuid": "59ae59ce-c5b7-4b65-94d2-6250257bf5c0", 00:41:20.733 "method": "bdev_lvol_get_lvstores", 00:41:20.733 "req_id": 1 00:41:20.733 } 00:41:20.733 Got JSON-RPC error response 00:41:20.733 response: 00:41:20.733 { 00:41:20.733 "code": -19, 00:41:20.733 "message": "No such device" 00:41:20.733 } 00:41:20.733 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:41:20.733 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:41:20.733 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:41:20.733 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:41:20.733 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:41:20.994 aio_bdev 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:41:20.994 09:58:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:41:20.994 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 -t 2000 00:41:21.254 [ 00:41:21.254 { 00:41:21.254 "name": "1319cf03-d2f7-42e2-b1b4-a68077b51cb9", 00:41:21.254 "aliases": [ 00:41:21.254 "lvs/lvol" 00:41:21.254 ], 00:41:21.254 "product_name": "Logical Volume", 00:41:21.254 "block_size": 4096, 00:41:21.254 "num_blocks": 38912, 00:41:21.254 "uuid": "1319cf03-d2f7-42e2-b1b4-a68077b51cb9", 00:41:21.254 "assigned_rate_limits": { 00:41:21.255 "rw_ios_per_sec": 0, 00:41:21.255 "rw_mbytes_per_sec": 0, 00:41:21.255 "r_mbytes_per_sec": 0, 00:41:21.255 "w_mbytes_per_sec": 0 00:41:21.255 }, 00:41:21.255 "claimed": false, 00:41:21.255 "zoned": false, 00:41:21.255 "supported_io_types": { 00:41:21.255 "read": true, 00:41:21.255 "write": true, 00:41:21.255 "unmap": true, 00:41:21.255 "flush": false, 00:41:21.255 "reset": true, 00:41:21.255 "nvme_admin": false, 00:41:21.255 "nvme_io": false, 00:41:21.255 "nvme_io_md": false, 00:41:21.255 "write_zeroes": true, 00:41:21.255 "zcopy": false, 00:41:21.255 "get_zone_info": false, 00:41:21.255 "zone_management": false, 00:41:21.255 "zone_append": false, 00:41:21.255 "compare": false, 00:41:21.255 "compare_and_write": false, 00:41:21.255 "abort": false, 00:41:21.255 "seek_hole": true, 00:41:21.255 "seek_data": true, 00:41:21.255 "copy": false, 00:41:21.255 "nvme_iov_md": false 00:41:21.255 }, 00:41:21.255 "driver_specific": { 00:41:21.255 "lvol": { 00:41:21.255 "lvol_store_uuid": "59ae59ce-c5b7-4b65-94d2-6250257bf5c0", 00:41:21.255 "base_bdev": "aio_bdev", 00:41:21.255 "thin_provision": false, 00:41:21.255 "num_allocated_clusters": 38, 00:41:21.255 "snapshot": false, 00:41:21.255 "clone": false, 00:41:21.255 "esnap_clone": false 00:41:21.255 } 00:41:21.255 } 00:41:21.255 } 00:41:21.255 ] 00:41:21.255 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:41:21.255 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:21.255 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:41:21.255 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:41:21.255 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:21.255 09:58:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:41:21.516 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:41:21.516 09:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1319cf03-d2f7-42e2-b1b4-a68077b51cb9 00:41:21.777 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 59ae59ce-c5b7-4b65-94d2-6250257bf5c0 00:41:21.778 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:41:22.039 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:41:22.039 00:41:22.039 real 0m17.315s 00:41:22.039 user 0m34.730s 00:41:22.040 sys 0m3.495s 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:41:22.040 ************************************ 00:41:22.040 END TEST lvs_grow_dirty 00:41:22.040 ************************************ 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:41:22.040 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:41:22.040 nvmf_trace.0 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
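The lvs_grow_dirty pass that just finished is a recovery test: the AIO base bdev is deleted out from under a live lvolstore, the follow-up bdev_lvol_get_lvstores is wrapped in NOT so the -19 "No such device" JSON-RPC error above counts as a pass, and once the file-backed bdev is recreated the logical volume is rediscovered from its on-disk metadata with the cluster accounting intact. A minimal standalone sketch of that check, assuming SPDK_DIR points at a built tree; the UUID and the 61/99 cluster counts are the values observed in this run:

rpc="$SPDK_DIR/scripts/rpc.py"
aio_file="$SPDK_DIR/test/nvmf/target/aio_bdev"        # backing file used above
lvs=59ae59ce-c5b7-4b65-94d2-6250257bf5c0              # lvstore UUID from this run
$rpc bdev_aio_delete aio_bdev                         # drop the base bdev
! $rpc bdev_lvol_get_lvstores -u "$lvs" >/dev/null 2>&1 || exit 1   # must fail while it is gone
$rpc bdev_aio_create "$aio_file" aio_bdev 4096        # recreate the file-backed bdev
$rpc bdev_wait_for_examine                            # let lvol reclaim the store
free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 ))                       # values observed above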
00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:22.302 rmmod nvme_tcp 00:41:22.302 rmmod nvme_fabrics 00:41:22.302 rmmod nvme_keyring 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3111947 ']' 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3111947 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3111947 ']' 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3111947 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111947 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111947' 00:41:22.302 killing process with pid 3111947 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3111947 00:41:22.302 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3111947 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:22.564 09:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:24.481 00:41:24.481 real 0m43.339s 00:41:24.481 user 0m52.265s 00:41:24.481 sys 0m10.586s 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:41:24.481 ************************************ 00:41:24.481 END TEST nvmf_lvs_grow 00:41:24.481 ************************************ 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:24.481 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:24.743 ************************************ 00:41:24.743 START TEST nvmf_bdev_io_wait 00:41:24.743 ************************************ 00:41:24.743 09:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:41:24.743 * Looking for test storage... 
00:41:24.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.743 --rc genhtml_branch_coverage=1 00:41:24.743 --rc genhtml_function_coverage=1 00:41:24.743 --rc genhtml_legend=1 00:41:24.743 --rc geninfo_all_blocks=1 00:41:24.743 --rc geninfo_unexecuted_blocks=1 00:41:24.743 00:41:24.743 ' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.743 --rc genhtml_branch_coverage=1 00:41:24.743 --rc genhtml_function_coverage=1 00:41:24.743 --rc genhtml_legend=1 00:41:24.743 --rc geninfo_all_blocks=1 00:41:24.743 --rc geninfo_unexecuted_blocks=1 00:41:24.743 00:41:24.743 ' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.743 --rc genhtml_branch_coverage=1 00:41:24.743 --rc genhtml_function_coverage=1 00:41:24.743 --rc genhtml_legend=1 00:41:24.743 --rc geninfo_all_blocks=1 00:41:24.743 --rc geninfo_unexecuted_blocks=1 00:41:24.743 00:41:24.743 ' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:24.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:24.743 --rc genhtml_branch_coverage=1 00:41:24.743 --rc genhtml_function_coverage=1 00:41:24.743 --rc genhtml_legend=1 00:41:24.743 --rc geninfo_all_blocks=1 00:41:24.743 --rc 
geninfo_unexecuted_blocks=1 00:41:24.743 00:41:24.743 ' 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:24.743 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:24.744 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:24.744 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:24.744 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:24.744 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:41:25.005 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:41:25.006 09:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
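The NIC discovery traced here is plain sysfs walking: e810, x722, and mlx are allow-lists of vendor:device IDs (0x8086:0x159b matches both ports of this host's E810 card), and each surviving PCI function is mapped to its kernel net device by globbing its net/ directory, which is how 0000:4b:00.0 resolves to cvl_0_0 in the loop that follows. The core of that loop, slightly simplified (the harness also gates on the link's operstate, visible as the up == up tests below):

for pci in "${pci_devs[@]}"; do                        # e.g. 0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the netdev dir
    [[ -e ${pci_net_devs[0]} ]] || continue            # skip unbound functions
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep the names only
    net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
done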
00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:31.604 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:31.604 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:31.604 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:31.605 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:31.605 
09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:31.605 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:31.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:31.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:41:31.605 00:41:31.605 --- 10.0.0.2 ping statistics --- 00:41:31.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.605 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:31.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:41:31.605 00:41:31.605 --- 10.0.0.1 ping statistics --- 00:41:31.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.605 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:31.605 09:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3116780 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3116780 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3116780 ']' 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:31.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
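The plumbing above builds the usual two-port TCP test topology: the target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables ACCEPT rule opens TCP 4420 on the initiator side, and the two pings prove reachability in both directions. nvmf_tgt is then launched inside the namespace and waitforlisten blocks until its RPC socket answers. A hedged sketch of that polling idea (the real helper lives in autotest_common.sh and may differ in detail); the pid and socket path are the ones logged above:

rpc="$SPDK_DIR/scripts/rpc.py"
pid=3116780; sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    # rpc_get_methods is a cheap, always-registered RPC; -t caps each attempt
    "$rpc" -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done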
00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:31.605 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:31.866 [2024-12-09 09:59:07.104022] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:31.866 [2024-12-09 09:59:07.105151] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:31.866 [2024-12-09 09:59:07.105200] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:31.866 [2024-12-09 09:59:07.207673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:31.866 [2024-12-09 09:59:07.237938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:31.866 [2024-12-09 09:59:07.237988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:31.866 [2024-12-09 09:59:07.237997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:31.866 [2024-12-09 09:59:07.238004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:31.866 [2024-12-09 09:59:07.238010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:31.866 [2024-12-09 09:59:07.239857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:31.866 [2024-12-09 09:59:07.239984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:31.866 [2024-12-09 09:59:07.240149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:31.866 [2024-12-09 09:59:07.240149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:31.866 [2024-12-09 09:59:07.240485] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
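Unlike a default poll-mode run, --interrupt-mode parks idle reactors on an event fd instead of busy-spinning, and the notices above confirm all four reactors of mask 0xF plus the app thread came up in intr mode. As a spot-check on a live target, framework_get_reactors reports per-core state (the exact output shape varies across SPDK versions, so the jq filter here is only illustrative):

"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_get_reactors | jq '.reactors[].lcore'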
00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 [2024-12-09 09:59:08.009633] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:32.809 [2024-12-09 09:59:08.010090] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:32.809 [2024-12-09 09:59:08.010762] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:32.809 [2024-12-09 09:59:08.010875] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
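Because the target was started with --wait-for-rpc, nothing is live until the script drives initialization over RPC: bdev options go in first (they only take effect before subsystem init), framework_start_init finishes bring-up, and the transport, malloc bdev, subsystem, namespace, and listener are created just below. The same sequence as a standalone script, with $rpc standing in for the full rpc.py path used in the log:

$rpc bdev_set_options -p 5 -c 1                  # io pool / cache sizing, pre-init only
$rpc framework_start_init                        # complete the deferred startup
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420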
00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 [2024-12-09 09:59:08.020721] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 Malloc0 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:32.809 [2024-12-09 09:59:08.089258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3116984 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3116986 00:41:32.809 09:59:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.809 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.810 { 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme$subsystem", 00:41:32.810 "trtype": "$TEST_TRANSPORT", 00:41:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "$NVMF_PORT", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.810 "hdgst": ${hdgst:-false}, 00:41:32.810 "ddgst": ${ddgst:-false} 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 } 00:41:32.810 EOF 00:41:32.810 )") 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3116988 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3116991 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.810 { 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme$subsystem", 00:41:32.810 "trtype": "$TEST_TRANSPORT", 00:41:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "$NVMF_PORT", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.810 "hdgst": ${hdgst:-false}, 00:41:32.810 "ddgst": ${ddgst:-false} 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 } 00:41:32.810 EOF 00:41:32.810 )") 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.810 { 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme$subsystem", 00:41:32.810 "trtype": "$TEST_TRANSPORT", 00:41:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "$NVMF_PORT", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.810 "hdgst": ${hdgst:-false}, 00:41:32.810 "ddgst": ${ddgst:-false} 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 } 00:41:32.810 EOF 00:41:32.810 )") 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:32.810 { 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme$subsystem", 00:41:32.810 "trtype": "$TEST_TRANSPORT", 00:41:32.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "$NVMF_PORT", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:32.810 "hdgst": ${hdgst:-false}, 00:41:32.810 "ddgst": ${ddgst:-false} 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 } 00:41:32.810 EOF 00:41:32.810 )") 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3116984 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
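The four interleaved heredocs above all come from one helper: gen_nvmf_target_json builds a bdev_nvme_attach_controller JSON fragment per subsystem, joins the fragments with IFS=',', and pretty-prints the result through jq, which is how each bdevperf instance ends up reading its target description from --json /dev/fd/63 (a bash process substitution). A minimal sketch reconstructed from the nvmf/common.sh@560-586 trace; the wrapper that embeds these fragments into the full bdevperf JSON config is not visible in the trace, so this sketch stops at the join step:

# Sketch only, reconstructed from the trace above (not the verbatim helper).
gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do   # default: a single subsystem, "1"
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,                          # join fragments with commas
  printf '%s\n' "${config[*]}" | jq .  # one fragment here, so valid JSON as-is
}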
00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme1", 00:41:32.810 "trtype": "tcp", 00:41:32.810 "traddr": "10.0.0.2", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "4420", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.810 "hdgst": false, 00:41:32.810 "ddgst": false 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 }' 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.810 "params": { 00:41:32.810 "name": "Nvme1", 00:41:32.810 "trtype": "tcp", 00:41:32.810 "traddr": "10.0.0.2", 00:41:32.810 "adrfam": "ipv4", 00:41:32.810 "trsvcid": "4420", 00:41:32.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.810 "hdgst": false, 00:41:32.810 "ddgst": false 00:41:32.810 }, 00:41:32.810 "method": "bdev_nvme_attach_controller" 00:41:32.810 }' 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:32.810 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.811 "params": { 00:41:32.811 "name": "Nvme1", 00:41:32.811 "trtype": "tcp", 00:41:32.811 "traddr": "10.0.0.2", 00:41:32.811 "adrfam": "ipv4", 00:41:32.811 "trsvcid": "4420", 00:41:32.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.811 "hdgst": false, 00:41:32.811 "ddgst": false 00:41:32.811 }, 00:41:32.811 "method": "bdev_nvme_attach_controller" 00:41:32.811 }' 00:41:32.811 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:41:32.811 09:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:32.811 "params": { 00:41:32.811 "name": "Nvme1", 00:41:32.811 "trtype": "tcp", 00:41:32.811 "traddr": "10.0.0.2", 00:41:32.811 "adrfam": "ipv4", 00:41:32.811 "trsvcid": "4420", 00:41:32.811 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:32.811 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:32.811 "hdgst": false, 00:41:32.811 "ddgst": false 00:41:32.811 }, 00:41:32.811 "method": "bdev_nvme_attach_controller" 00:41:32.811 }' 00:41:32.811 [2024-12-09 09:59:08.145282] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:32.811 [2024-12-09 09:59:08.145339] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:41:32.811 [2024-12-09 09:59:08.146995] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:41:32.811 [2024-12-09 09:59:08.147044] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:41:32.811 [2024-12-09 09:59:08.147761] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:32.811 [2024-12-09 09:59:08.147808] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:41:32.811 [2024-12-09 09:59:08.148754] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:32.811 [2024-12-09 09:59:08.148800] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:41:33.071 [2024-12-09 09:59:08.302700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.071 [2024-12-09 09:59:08.315282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:41:33.071 [2024-12-09 09:59:08.323844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.071 [2024-12-09 09:59:08.334733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:41:33.071 [2024-12-09 09:59:08.368611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.071 [2024-12-09 09:59:08.380125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:41:33.071 [2024-12-09 09:59:08.415814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:33.071 [2024-12-09 09:59:08.427442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:41:33.333 Running I/O for 1 seconds... 00:41:33.333 Running I/O for 1 seconds... 00:41:33.333 Running I/O for 1 seconds... 00:41:33.333 Running I/O for 1 seconds... 
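All four "Running I/O for 1 seconds..." lines are live at once: the point of the bdev_io_wait test is to drive write, read, flush, and unmap jobs against cnode1 concurrently, each bdevperf pinned to its own core and remembered by PID so the script can reap them in order (bdev_io_wait.sh@37-40). The launch-and-wait pattern, condensed from the trace, with the workspace path shortened and the PIDs of course run-specific:

BDEVPERF=./build/examples/bdevperf   # full Jenkins path shortened
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID

The -i instance IDs keep the four DPDK processes apart (they appear as --file-prefix=spdk1..spdk4 in the EAL parameter lines above), and -s 256 is what shows up there as '-m 256'.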
00:41:34.276 181008.00 IOPS, 707.06 MiB/s
00:41:34.276 Latency(us)
00:41:34.276 [2024-12-09T08:59:09.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:34.276 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:41:34.276 Nvme1n1 : 1.00 180649.82 705.66 0.00 0.00 704.55 296.96 1966.08
00:41:34.276 [2024-12-09T08:59:09.729Z] ===================================================================================================================
00:41:34.276 [2024-12-09T08:59:09.729Z] Total : 180649.82 705.66 0.00 0.00 704.55 296.96 1966.08
00:41:34.276 7686.00 IOPS, 30.02 MiB/s
00:41:34.276 Latency(us)
[2024-12-09T08:59:09.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:34.276 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:41:34.276 Nvme1n1 : 1.02 7680.13 30.00 0.00 0.00 16501.81 2921.81 25995.95
00:41:34.276 [2024-12-09T08:59:09.729Z] ===================================================================================================================
00:41:34.276 [2024-12-09T08:59:09.729Z] Total : 7680.13 30.00 0.00 0.00 16501.81 2921.81 25995.95
00:41:34.276 13216.00 IOPS, 51.62 MiB/s
[2024-12-09T08:59:09.729Z] 7504.00 IOPS, 29.31 MiB/s
00:41:34.276 Latency(us)
[2024-12-09T08:59:09.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:34.276 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:41:34.276 Nvme1n1 : 1.01 13255.71 51.78 0.00 0.00 9620.41 4751.36 15291.73
00:41:34.276 [2024-12-09T08:59:09.729Z] ===================================================================================================================
00:41:34.276 [2024-12-09T08:59:09.729Z] Total : 13255.71 51.78 0.00 0.00 9620.41 4751.36 15291.73
00:41:34.276
00:41:34.276 Latency(us)
[2024-12-09T08:59:09.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:34.276 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:41:34.276 Nvme1n1 : 1.01 7615.06 29.75 0.00 0.00 16760.25 4751.36 30365.01
00:41:34.276 [2024-12-09T08:59:09.729Z] ===================================================================================================================
00:41:34.276 [2024-12-09T08:59:09.729Z] Total : 7615.06 29.75 0.00 0.00 16760.25 4751.36 30365.01
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3116986
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3116988
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3116991
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:41:34.537 09:59:09
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:34.537 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:34.537 rmmod nvme_tcp 00:41:34.537 rmmod nvme_fabrics 00:41:34.537 rmmod nvme_keyring 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3116780 ']' 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3116780 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3116780 ']' 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3116780 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116780 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116780' 00:41:34.538 killing process with pid 3116780 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3116780 00:41:34.538 09:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3116780 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:41:34.800 09:59:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:34.800 09:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:36.714 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:36.714 00:41:36.714 real 0m12.174s 00:41:36.714 user 0m14.802s 00:41:36.714 sys 0m6.889s 00:41:36.714 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:36.714 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:41:36.714 ************************************ 00:41:36.714 END TEST nvmf_bdev_io_wait 00:41:36.714 ************************************ 00:41:36.989 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:36.990 ************************************ 00:41:36.990 START TEST nvmf_queue_depth 00:41:36.990 ************************************ 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:41:36.990 * Looking for test storage... 
00:41:36.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:36.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.990 --rc genhtml_branch_coverage=1 00:41:36.990 --rc genhtml_function_coverage=1 00:41:36.990 --rc genhtml_legend=1 00:41:36.990 --rc geninfo_all_blocks=1 00:41:36.990 --rc geninfo_unexecuted_blocks=1 00:41:36.990 00:41:36.990 ' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:36.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.990 --rc genhtml_branch_coverage=1 00:41:36.990 --rc genhtml_function_coverage=1 00:41:36.990 --rc genhtml_legend=1 00:41:36.990 --rc geninfo_all_blocks=1 00:41:36.990 --rc geninfo_unexecuted_blocks=1 00:41:36.990 00:41:36.990 ' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:36.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.990 --rc genhtml_branch_coverage=1 00:41:36.990 --rc genhtml_function_coverage=1 00:41:36.990 --rc genhtml_legend=1 00:41:36.990 --rc geninfo_all_blocks=1 00:41:36.990 --rc geninfo_unexecuted_blocks=1 00:41:36.990 00:41:36.990 ' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:36.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:36.990 --rc genhtml_branch_coverage=1 00:41:36.990 --rc genhtml_function_coverage=1 00:41:36.990 --rc genhtml_legend=1 00:41:36.990 --rc geninfo_all_blocks=1 00:41:36.990 --rc 
geninfo_unexecuted_blocks=1 00:41:36.990 00:41:36.990 ' 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:36.990 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.252 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:41:37.253 09:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:43.842 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
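nvmftestinit is now probing hardware: gather_supported_nvmf_pci_devs fills the e810/x722/mlx arrays with known Intel and Mellanox vendor:device IDs, and the pass traced below resolves every matching PCI function to its kernel net device through sysfs, keeping only interfaces that are up — which is where the cvl_0_0/cvl_0_1 names come from. The core of that lookup, reduced to a sketch (array names as in the trace, the ID tables elided):

# Sketch of the discovery loop traced below (cf. the nvmf/common.sh@410-429 lines).
# pci_devs holds the PCI addresses whose IDs matched a supported NIC,
# here 0000:4b:00.0 and 0000:4b:00.1 for an Intel E810 (0x8086 - 0x159b).
net_devs=()
for pci in "${pci_devs[@]}"; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:4b:00.0/net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done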
00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:43.843 09:59:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:43.843 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:43.843 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:41:43.843 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:43.843 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:43.843 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:44.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:44.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:41:44.104 00:41:44.104 --- 10.0.0.2 ping statistics --- 00:41:44.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.104 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:44.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:44.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:41:44.104 00:41:44.104 --- 10.0.0.1 ping statistics --- 00:41:44.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:44.104 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:44.104 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3121360 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3121360 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3121360 ']' 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
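The queue-depth test bed is now fully wired: nvmf_tcp_init moved the first E810 port (cvl_0_0, 10.0.0.2/24) into the cvl_0_0_ns_spdk namespace as the target side, kept cvl_0_1 (10.0.0.1/24) in the root namespace as the initiator side, opened TCP/4420 in iptables, proved reachability both ways with ping, and launched nvmf_tgt inside the namespace. The same plumbing, condensed from the trace above (binary path shortened):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1     # start from clean interfaces
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2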
00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.366 09:59:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:44.366 [2024-12-09 09:59:19.647327] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:44.366 [2024-12-09 09:59:19.648477] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:41:44.366 [2024-12-09 09:59:19.648529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:44.366 [2024-12-09 09:59:19.749635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:44.366 [2024-12-09 09:59:19.776535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:44.366 [2024-12-09 09:59:19.776585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:44.366 [2024-12-09 09:59:19.776594] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:44.366 [2024-12-09 09:59:19.776602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:44.366 [2024-12-09 09:59:19.776608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:44.366 [2024-12-09 09:59:19.777330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.628 [2024-12-09 09:59:19.846275] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:44.628 [2024-12-09 09:59:19.846539] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
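With the reactor and both spdk_threads confirmed in interrupt mode, queue_depth.sh provisions the target over RPC with the same five-call sequence bdev_io_wait.sh used above: transport, Malloc bdev, subsystem, namespace, listener. rpc_cmd in the harness forwards these through a persistent scripts/rpc.py session; as standalone calls they would look roughly like this (sizes from MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512, flags exactly as traced):

RPC=./scripts/rpc.py                          # talks to /var/tmp/spdk.sock by default
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0     # 64 MiB bdev, 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420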
00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 [2024-12-09 09:59:20.514192] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 Malloc0 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 [2024-12-09 09:59:20.594363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3121691 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3121691 /var/tmp/bdevperf.sock 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3121691 ']' 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:41:45.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:45.215 09:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:41:45.215 [2024-12-09 09:59:20.650741] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
00:41:45.215 [2024-12-09 09:59:20.650807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3121691 ]
00:41:45.591 [2024-12-09 09:59:20.740434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:45.591 [2024-12-09 09:59:20.768347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:41:46.183 NVMe0n1
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:46.183 09:59:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:41:46.443 Running I/O for 10 seconds...
00:41:48.333 8074.00 IOPS, 31.54 MiB/s
[2024-12-09T08:59:24.732Z] 8192.00 IOPS, 32.00 MiB/s
[2024-12-09T08:59:25.674Z] 8404.33 IOPS, 32.83 MiB/s
[2024-12-09T08:59:27.056Z] 8684.75 IOPS, 33.92 MiB/s
[2024-12-09T08:59:27.997Z] 9514.80 IOPS, 37.17 MiB/s
[2024-12-09T08:59:28.939Z] 10123.00 IOPS, 39.54 MiB/s
[2024-12-09T08:59:29.892Z] 10548.00 IOPS, 41.20 MiB/s
[2024-12-09T08:59:30.833Z] 10886.75 IOPS, 42.53 MiB/s
[2024-12-09T08:59:31.778Z] 11138.00 IOPS, 43.51 MiB/s
[2024-12-09T08:59:31.778Z] 11320.10 IOPS, 44.22 MiB/s
00:41:56.325 Latency(us)
00:41:56.325 [2024-12-09T08:59:31.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:56.325 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:41:56.325 Verification LBA range: start 0x0 length 0x4000
00:41:56.325 NVMe0n1 : 10.05 11358.71 44.37 0.00 0.00 89794.04 17803.95 83449.17
00:41:56.325 [2024-12-09T08:59:31.778Z] ===================================================================================================================
00:41:56.325 [2024-12-09T08:59:31.778Z] Total : 11358.71 44.37 0.00 0.00 89794.04 17803.95 83449.17
00:41:56.325 {
00:41:56.325 "results": [
00:41:56.325 {
00:41:56.325 "job": "NVMe0n1",
00:41:56.325 "core_mask": "0x1",
00:41:56.325 "workload": "verify",
00:41:56.325 "status": "finished",
00:41:56.325 "verify_range": {
00:41:56.325 "start": 0,
00:41:56.325 "length": 16384
00:41:56.325 },
00:41:56.325 "queue_depth": 1024,
00:41:56.325 "io_size": 4096,
00:41:56.325 "runtime": 10.051495,
00:41:56.325 "iops": 11358.708331447213,
00:41:56.325 "mibps": 44.369954419715675,
00:41:56.325 "io_failed": 0,
00:41:56.325 "io_timeout": 0,
00:41:56.325 "avg_latency_us": 89794.03800861858,
00:41:56.325 "min_latency_us": 17803.946666666667,
00:41:56.325 "max_latency_us": 83449.17333333334
00:41:56.325 }
00:41:56.325 ],
00:41:56.325 "core_count": 1
00:41:56.325 }
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3121691
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3121691 ']'
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3121691
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:56.325 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3121691
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3121691'
killing process with pid 3121691
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3121691
00:41:56.585 Received shutdown signal, test time was about 10.000000 seconds
00:41:56.585
00:41:56.585 Latency(us)
00:41:56.585 [2024-12-09T08:59:32.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:41:56.585 [2024-12-09T08:59:32.038Z] ===================================================================================================================
00:41:56.585 [2024-12-09T08:59:32.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3121691
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3121360 ']'
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3121360
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3121360 ']'
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3121360
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:56.585 09:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3121360
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3121360'
killing process with pid 3121360
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3121360
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3121360
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:41:56.845 09:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:41:59.391 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:41:59.391
00:41:59.391 real 0m22.036s
00:41:59.391 user 0m24.403s
00:41:59.391 sys 0m7.210s
09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:59.391 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:41:59.391 ************************************
00:41:59.391 END TEST nvmf_queue_depth
00:41:59.391 ************************************
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:41:59.392 ************************************
00:41:59.392 START TEST nvmf_target_multipath
00:41:59.392 ************************************
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:41:59.392 * Looking for test storage...
00:41:59.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath --
scripts/common.sh@344 -- # case "$op" in 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:59.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.392 --rc genhtml_branch_coverage=1 00:41:59.392 --rc genhtml_function_coverage=1 00:41:59.392 --rc genhtml_legend=1 00:41:59.392 --rc geninfo_all_blocks=1 00:41:59.392 --rc geninfo_unexecuted_blocks=1 00:41:59.392 00:41:59.392 ' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:59.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.392 --rc genhtml_branch_coverage=1 00:41:59.392 --rc genhtml_function_coverage=1 00:41:59.392 --rc genhtml_legend=1 00:41:59.392 --rc geninfo_all_blocks=1 00:41:59.392 --rc geninfo_unexecuted_blocks=1 00:41:59.392 00:41:59.392 ' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:59.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.392 --rc genhtml_branch_coverage=1 00:41:59.392 --rc genhtml_function_coverage=1 00:41:59.392 --rc genhtml_legend=1 
00:41:59.392 --rc geninfo_all_blocks=1 00:41:59.392 --rc geninfo_unexecuted_blocks=1 00:41:59.392 00:41:59.392 ' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:59.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:59.392 --rc genhtml_branch_coverage=1 00:41:59.392 --rc genhtml_function_coverage=1 00:41:59.392 --rc genhtml_legend=1 00:41:59.392 --rc geninfo_all_blocks=1 00:41:59.392 --rc geninfo_unexecuted_blocks=1 00:41:59.392 00:41:59.392 ' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.392 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:41:59.393 09:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:05.992 09:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:05.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:05.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:05.992 09:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:05.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:05.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:05.992 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:05.993 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:06.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:06.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.608 ms
00:42:06.253
00:42:06.253 --- 10.0.0.2 ping statistics ---
00:42:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:06.253 rtt min/avg/max/mdev = 0.608/0.608/0.608/0.000 ms
00:42:06.253 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:42:06.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:42:06.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms
00:42:06.253
00:42:06.253 --- 10.0.0.1 ping statistics ---
00:42:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:06.253 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
only one NIC for nvmf test
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:42:06.254 09:59:41
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:06.254 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:06.515 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:06.515 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:06.515 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:06.515 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:06.515 09:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:42:08.428 09:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:42:08.428 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:08.429 00:42:08.429 real 0m9.495s 00:42:08.429 user 0m2.125s 00:42:08.429 sys 0m5.282s 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:42:08.429 ************************************ 00:42:08.429 END TEST nvmf_target_multipath 00:42:08.429 ************************************ 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:08.429 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:08.690 ************************************ 00:42:08.690 START TEST nvmf_zcopy 00:42:08.690 ************************************ 00:42:08.690 09:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:42:08.690 * Looking for test storage... 
00:42:08.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.690 --rc genhtml_branch_coverage=1 00:42:08.690 --rc genhtml_function_coverage=1 00:42:08.690 --rc genhtml_legend=1 00:42:08.690 --rc geninfo_all_blocks=1 00:42:08.690 --rc geninfo_unexecuted_blocks=1 00:42:08.690 00:42:08.690 ' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.690 --rc genhtml_branch_coverage=1 00:42:08.690 --rc genhtml_function_coverage=1 00:42:08.690 --rc genhtml_legend=1 00:42:08.690 --rc geninfo_all_blocks=1 00:42:08.690 --rc geninfo_unexecuted_blocks=1 00:42:08.690 00:42:08.690 ' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.690 --rc genhtml_branch_coverage=1 00:42:08.690 --rc genhtml_function_coverage=1 00:42:08.690 --rc genhtml_legend=1 00:42:08.690 --rc geninfo_all_blocks=1 00:42:08.690 --rc geninfo_unexecuted_blocks=1 00:42:08.690 00:42:08.690 ' 00:42:08.690 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:08.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:08.690 --rc genhtml_branch_coverage=1 00:42:08.690 --rc genhtml_function_coverage=1 00:42:08.690 --rc genhtml_legend=1 00:42:08.690 --rc geninfo_all_blocks=1 00:42:08.690 --rc geninfo_unexecuted_blocks=1 00:42:08.691 00:42:08.691 ' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:08.691 09:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:08.691 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:42:08.952 09:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.109 09:59:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.109 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:17.110 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:17.110 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:17.110 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:17.110 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.110 09:59:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:17.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:42:17.110 00:42:17.110 --- 10.0.0.2 ping statistics --- 00:42:17.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.110 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:17.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:42:17.110 00:42:17.110 --- 10.0.0.1 ping statistics --- 00:42:17.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.110 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:17.110 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3131975 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3131975 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3131975 ']' 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:17.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:17.111 09:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 [2024-12-09 09:59:51.441126] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:17.111 [2024-12-09 09:59:51.442065] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:42:17.111 [2024-12-09 09:59:51.442104] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:17.111 [2024-12-09 09:59:51.537809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.111 [2024-12-09 09:59:51.554598] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:17.111 [2024-12-09 09:59:51.554634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:17.111 [2024-12-09 09:59:51.554647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:17.111 [2024-12-09 09:59:51.554654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:17.111 [2024-12-09 09:59:51.554661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:17.111 [2024-12-09 09:59:51.555192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:17.111 [2024-12-09 09:59:51.604224] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:42:17.111 [2024-12-09 09:59:51.604464] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
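The block above moved one port of the E810 pair (cvl_0_0) into a private network namespace, addressed both ends (10.0.0.1 on the initiator side, 10.0.0.2 inside the namespace), opened TCP port 4420 through the firewall, and verified reachability with a ping in each direction before starting nvmf_tgt inside the namespace. A minimal sketch that reproduces the same topology on a machine without the physical cvl_0_0/cvl_0_1 ports, using a veth pair instead (the veth interface names and the nvmf_tgt_ns namespace name are illustrative assumptions, not what common.sh uses):

  # Assumed veth-based stand-in for the physical cvl_0_0/cvl_0_1 setup above.
  ip netns add nvmf_tgt_ns
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns nvmf_tgt_ns
  ip addr add 10.0.0.1/24 dev veth_init
  ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec nvmf_tgt_ns ip link set veth_tgt up
  ip netns exec nvmf_tgt_ns ip link set lo up
  # Open the NVMe/TCP listener port, mirroring the ipts helper in the log:
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
  # Reachability checks in both directions, as common.sh does:
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1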
00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 [2024-12-09 09:59:52.271940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 [2024-12-09 09:59:52.300183] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:42:17.111 09:59:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 malloc0 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:17.111 { 00:42:17.111 "params": { 00:42:17.111 "name": "Nvme$subsystem", 00:42:17.111 "trtype": "$TEST_TRANSPORT", 00:42:17.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:17.111 "adrfam": "ipv4", 00:42:17.111 "trsvcid": "$NVMF_PORT", 00:42:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:17.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:17.111 "hdgst": ${hdgst:-false}, 00:42:17.111 "ddgst": ${ddgst:-false} 00:42:17.111 }, 00:42:17.111 "method": "bdev_nvme_attach_controller" 00:42:17.111 } 00:42:17.111 EOF 00:42:17.111 )") 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:17.111 09:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:17.111 "params": { 00:42:17.111 "name": "Nvme1", 00:42:17.111 "trtype": "tcp", 00:42:17.111 "traddr": "10.0.0.2", 00:42:17.111 "adrfam": "ipv4", 00:42:17.111 "trsvcid": "4420", 00:42:17.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:17.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:17.111 "hdgst": false, 00:42:17.111 "ddgst": false 00:42:17.111 }, 00:42:17.111 "method": "bdev_nvme_attach_controller" 00:42:17.111 }' 00:42:17.111 [2024-12-09 09:59:52.410906] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
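The rpc_cmd calls above assemble the zcopy target step by step: a TCP transport created with zero-copy enabled (--zcopy), subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 32 MiB malloc bdev with 4096-byte blocks attached as namespace 1. Since rpc_cmd is a thin wrapper around SPDK's RPC client, the same sequence can be issued directly with scripts/rpc.py against a running nvmf_tgt (the relative script path below is an assumption; the arguments are taken verbatim from the log):

  # Sketch: the target assembly above as standalone rpc.py calls.
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf launch that follows then connects to this target using the JSON config emitted by gen_nvmf_target_json, which renders the heredoc template into the bdev_nvme_attach_controller parameters shown in the log.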
00:42:17.111 [2024-12-09 09:59:52.410957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132061 ] 00:42:17.111 [2024-12-09 09:59:52.498087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:17.111 [2024-12-09 09:59:52.516590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:17.372 Running I/O for 10 seconds... 00:42:19.257 6421.00 IOPS, 50.16 MiB/s [2024-12-09T08:59:56.106Z] 6567.00 IOPS, 51.30 MiB/s [2024-12-09T08:59:57.048Z] 6616.00 IOPS, 51.69 MiB/s [2024-12-09T08:59:57.989Z] 6613.00 IOPS, 51.66 MiB/s [2024-12-09T08:59:58.933Z] 6633.60 IOPS, 51.83 MiB/s [2024-12-09T08:59:59.877Z] 6635.67 IOPS, 51.84 MiB/s [2024-12-09T09:00:00.820Z] 6912.57 IOPS, 54.00 MiB/s [2024-12-09T09:00:01.762Z] 7262.38 IOPS, 56.74 MiB/s [2024-12-09T09:00:03.145Z] 7534.33 IOPS, 58.86 MiB/s [2024-12-09T09:00:03.145Z] 7748.20 IOPS, 60.53 MiB/s 00:42:27.692 Latency(us) 00:42:27.692 [2024-12-09T09:00:03.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:27.692 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:42:27.692 Verification LBA range: start 0x0 length 0x1000 00:42:27.692 Nvme1n1 : 10.01 7751.91 60.56 0.00 0.00 16463.82 2280.11 27088.21 00:42:27.692 [2024-12-09T09:00:03.145Z] =================================================================================================================== 00:42:27.692 [2024-12-09T09:00:03.145Z] Total : 7751.91 60.56 0.00 0.00 16463.82 2280.11 27088.21 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3134168 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:27.692 { 00:42:27.692 "params": { 00:42:27.692 "name": "Nvme$subsystem", 00:42:27.692 "trtype": "$TEST_TRANSPORT", 00:42:27.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:27.692 "adrfam": "ipv4", 00:42:27.692 "trsvcid": "$NVMF_PORT", 00:42:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:27.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:27.692 "hdgst": ${hdgst:-false}, 00:42:27.692 "ddgst": ${ddgst:-false} 00:42:27.692 }, 00:42:27.692 "method": "bdev_nvme_attach_controller" 00:42:27.692 } 00:42:27.692 EOF 00:42:27.692 )") 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:42:27.692 
[2024-12-09 10:00:02.831515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.831546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:42:27.692 10:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:27.692 "params": { 00:42:27.692 "name": "Nvme1", 00:42:27.692 "trtype": "tcp", 00:42:27.692 "traddr": "10.0.0.2", 00:42:27.692 "adrfam": "ipv4", 00:42:27.692 "trsvcid": "4420", 00:42:27.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:27.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:27.692 "hdgst": false, 00:42:27.692 "ddgst": false 00:42:27.692 }, 00:42:27.692 "method": "bdev_nvme_attach_controller" 00:42:27.692 }' 00:42:27.692 [2024-12-09 10:00:02.843478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.843488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.855476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.855484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.867476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.867484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.874531] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
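At this point the test switches to the error path: a second bdevperf instance runs a 5-second random read/write workload (-t 5 -q 128 -w randrw -M 50 -o 8192) while the script repeatedly tries to re-register namespace 1 on cnode1, and every attempt is expected to fail with the paired "Requested NSID 1 already in use" / "Unable to add namespace" errors that fill the remainder of this run. A plausible shape for that driving loop, assuming (the target/zcopy.sh body is not reproduced in this excerpt) it simply re-issues the add_ns RPC while the workload is alive:

  # Assumption: sketch of the re-add loop behind the repeated error pairs;
  # this is not the literal zcopy.sh source, which the log does not show.
  while kill -0 "$perfpid" 2>/dev/null; do
    # Must fail: NSID 1 is already attached to the subsystem.
    if rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
      echo "FATAL: duplicate NSID 1 was accepted" >&2
      exit 1
    fi
  done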
00:42:27.692 [2024-12-09 10:00:02.874579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134168 ] 00:42:27.692 [2024-12-09 10:00:02.879476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.879485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.891475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.891483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.903476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.903484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.915475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.915483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.927475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.927483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.939475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.939483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.951476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.951483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.955659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:27.692 [2024-12-09 10:00:02.963477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.963486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.971337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:27.692 [2024-12-09 10:00:02.975475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.975485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.987482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.987493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:02.999481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:02.999494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.011477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.011488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.023477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:42:27.692 [2024-12-09 10:00:03.023486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.035487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.035503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.047479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.047490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.059488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.059499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.071478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.071489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.083478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.083488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.095483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.095499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 Running I/O for 5 seconds... 00:42:27.692 [2024-12-09 10:00:03.110409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.110426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.123761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.123778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.692 [2024-12-09 10:00:03.138664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.692 [2024-12-09 10:00:03.138680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.151765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.151780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.166685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.166700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.179822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.179838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.195301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.195316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.208307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.208323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:42:27.953 [2024-12-09 10:00:03.222375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.222391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.235797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.235812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.250300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.250316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.262965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.262980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.276391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.276406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.290781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.290797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.303565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.303580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.316826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.316842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.331094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.331109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.344031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.344049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.359185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.359200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.371967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.371981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.953 [2024-12-09 10:00:03.386889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.953 [2024-12-09 10:00:03.386904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:27.954 [2024-12-09 10:00:03.399819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:27.954 [2024-12-09 10:00:03.399834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:28.214 [2024-12-09 10:00:03.414381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:42:28.214 [2024-12-09 10:00:03.414397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:28.214 [2024-12-09 10:00:03.427063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:42:28.214 [2024-12-09 10:00:03.427078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace" pair repeats at ~13 ms intervals through 2024-12-09 10:00:04.099 ...]
00:42:28.737 19134.00 IOPS, 149.48 MiB/s [2024-12-09T09:00:04.190Z]
[... error pair repeats through 2024-12-09 10:00:05.099 ...]
00:42:29.781 19167.00 IOPS, 149.74 MiB/s [2024-12-09T09:00:05.234Z]
[... error pair repeats through 2024-12-09 10:00:06.108 ...]
00:42:30.828 19159.67 IOPS, 149.68 MiB/s [2024-12-09T09:00:06.281Z]
[... error pair repeats through 2024-12-09 10:00:07.108 ...]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:06.943921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:06.943935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:06.958875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:06.958891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:06.972084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:06.972099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:06.986926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:06.986941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:07.000017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:07.000032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:07.014636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:07.014655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:07.027724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:07.027740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:07.040418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:07.040433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.612 [2024-12-09 10:00:07.054842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.612 [2024-12-09 10:00:07.054858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.067863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.067879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.082490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.082506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.095673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.095688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.108685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.108700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 19160.50 IOPS, 149.69 MiB/s [2024-12-09T09:00:07.326Z] [2024-12-09 10:00:07.122697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.122713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 
10:00:07.135699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.135714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.148540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.148556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.162572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.162588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.175650] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.175666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.188485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.188501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.202893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.202910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.215600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.215616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.228334] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.228350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.242847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.242863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.255792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.255808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.270957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.270973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.283586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.283601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.296921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.296936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.310724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.310739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:31.873 [2024-12-09 10:00:07.323598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:31.873 [2024-12-09 10:00:07.323613] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.336281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.336299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.350371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.350387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.363176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.363191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.376644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.376660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.390698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.390714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.403647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.403662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.416378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.416393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.430672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.430688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.443945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.443959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.458524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.458540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.471475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.471490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.484657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.484673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.498498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.498513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.511282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.511298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.524164] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.524180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.538562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.538578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.551551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.551567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.564927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.564942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.134 [2024-12-09 10:00:07.579017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.134 [2024-12-09 10:00:07.579032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.591744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.591760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.604523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.604538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.618750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.618767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.631855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.631870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.646513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.646529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.659566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.659582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.672158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.672172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.687201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.687216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.700187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.700202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.714634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.714658] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.727712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.727727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.740364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.740379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.754882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.754897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.767690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.767706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.780963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.780978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.394 [2024-12-09 10:00:07.794943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.394 [2024-12-09 10:00:07.794959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.395 [2024-12-09 10:00:07.807952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.395 [2024-12-09 10:00:07.807967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.395 [2024-12-09 10:00:07.823092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.395 [2024-12-09 10:00:07.823108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.395 [2024-12-09 10:00:07.836358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.395 [2024-12-09 10:00:07.836373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.850645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.850661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.863575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.863591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.876496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.876511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.890466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.890482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.903341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.903356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.916674] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.916696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.931115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.931130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.944097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.944112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.958848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.958863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.972032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.972047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:07.986868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:07.986883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:08.000084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:08.000098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:08.014191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:08.014207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:08.027405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:08.027420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.655 [2024-12-09 10:00:08.041072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.655 [2024-12-09 10:00:08.041087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.656 [2024-12-09 10:00:08.055028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.656 [2024-12-09 10:00:08.055043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.656 [2024-12-09 10:00:08.068180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.656 [2024-12-09 10:00:08.068194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.656 [2024-12-09 10:00:08.082515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.656 [2024-12-09 10:00:08.082530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.656 [2024-12-09 10:00:08.095425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.656 [2024-12-09 10:00:08.095440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.108422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.108437] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 19166.20 IOPS, 149.74 MiB/s [2024-12-09T09:00:08.370Z] [2024-12-09 10:00:08.119483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.119499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 00:42:32.917 Latency(us) 00:42:32.917 [2024-12-09T09:00:08.370Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:42:32.917 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:42:32.917 Nvme1n1 : 5.01 19167.78 149.75 0.00 0.00 6671.71 2457.60 11195.73 00:42:32.917 [2024-12-09T09:00:08.370Z] =================================================================================================================== 00:42:32.917 [2024-12-09T09:00:08.370Z] Total : 19167.78 149.75 0.00 0.00 6671.71 2457.60 11195.73 00:42:32.917 [2024-12-09 10:00:08.131480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.131497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.143484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.143494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.155494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.155506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.167480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.167490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.179477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.179487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.191476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.191484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.203479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.203490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 [2024-12-09 10:00:08.215478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:42:32.917 [2024-12-09 10:00:08.215488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:42:32.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3134168) - No such process 00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3134168 00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
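Each pair of errors condensed above is one rejected nvmf_subsystem_add_ns RPC: namespace 1 is already attached to the subsystem, so every retry fails and is reported through nvmf_rpc_ns_paused. A minimal sketch that reproduces a single instance by hand, assuming a running target on the default RPC socket and the bdev names used by this test:

    # NSID 1 is already attached to cnode1, so the target rejects this call
    # and logs the same "Requested NSID 1 already in use" / "Unable to add namespace" pair
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1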
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:32.917 delay0
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:32.917 10:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:42:33.177 [2024-12-09 10:00:08.380998] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:42:39.764 Initializing NVMe Controllers
00:42:39.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:42:39.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:42:39.764 Initialization complete. Launching workers.
00:42:39.764 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4849
00:42:39.764 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5136, failed to submit 33
00:42:39.764 success 4979, unsuccessful 157, failed 0
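For reference, the scenario just measured reduces to three commands; a minimal sketch, assuming a running target that already exposes nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev, with paths relative to the SPDK build tree used above:

    # wrap malloc0 in a delay bdev (all four latency knobs set to 1,000,000 us)
    # and expose the slow bdev as namespace 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # hammer the slow namespace with abort requests for 5 s at queue depth 64
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The 1 s artificial latency is what makes the in-flight commands abortable: of 5136 aborts submitted above, 4979 found their target command still queued.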
00:42:39.764 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:42:39.764 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:42:39.764 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:42:39.764 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:42:39.764 10:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3131975
00:42:40.023 killing process with pid 3131975
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3131975
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3131975
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:42:40.023 10:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:42:42.062
00:42:42.062 real    0m33.379s
00:42:42.062 user    0m42.747s
00:42:42.062 sys     0m11.904s
00:42:42.062 ************************************
00:42:42.062 END TEST nvmf_zcopy
00:42:42.062 ************************************
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:42:42.062 ************************************
00:42:42.062 START TEST nvmf_nmic
00:42:42.062 ************************************
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:42:42.062 * Looking for test storage...
00:42:42.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:42:42.062 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:42:42.324 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:42:42.324 (scripts/common.sh@333-368 cmp_versions trace: both versions are split on '.', ver1[0]=1 is compared against ver2[0]=2, 1 < 2, so lt returns 0: the installed lcov predates version 2)
00:42:42.324 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:42:42.324 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724-1725 -- # export LCOV_OPTS and LCOV with lcov_branch_coverage=1, lcov_function_coverage=1, genhtml_branch_coverage=1, genhtml_function_coverage=1, genhtml_legend=1, geninfo_all_blocks=1, geninfo_unexecuted_blocks=1 (four near-identical multi-line assignments condensed)
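The lt helper traced above is a plain field-wise version compare; a condensed, self-contained sketch of the same idea (the helper in scripts/common.sh is structured differently but computes the same result):

    # return 0 (true) when version $1 is strictly less than version $2,
    # comparing numeric fields split on '.', '-' and ':' as the trace does
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1
    }

    lt 1.15 2 && echo "lcov is older than 2"   # true here: 1 < 2 in the first field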
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:42:42.325 nvmf/common.sh@9-16: NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422 NVMF_IP_PREFIX=192.168.100 NVMF_IP_LEAST_ADDR=8 NVMF_TCP_IP_ADDRESS=127.0.0.1 NVMF_TRANSPORT_OPTS= NVMF_SERIAL=SPDKISFASTANDAWESOME
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:42:42.325 (paths/export.sh@2-6 trace: the golangci 1.54.2, protoc 21.7, and go 1.21.1 bin directories are prepended to PATH and the combined PATH is exported and echoed; the repeated full PATH strings are condensed here)
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:42:42.325 10:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:42:50.466 nvmf/common.sh@315-319: the pci_devs, pci_net_devs, pci_drivers, and net_devs arrays are declared empty
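The initiator identity set up in the trace above comes straight from nvme-cli; a minimal sketch of the same two assignments (the UUID shown is the one generated in this run, and the parameter expansion is one way to derive the host ID, the exact derivation in common.sh may differ):

    # generate a fresh host NQN, e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # the host ID is the UUID portion after the last ':'
    NVME_HOSTID=${NVME_HOSTNQN##*:}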
00:42:50.466 nvmf/common.sh@320-344: the e810, x722, and mlx device-ID tables are populated from pci_bus_cache (Intel 0x1592, 0x159b, 0x37d2; Mellanox 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013)
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:42:50.466 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:42:50.466 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:42:50.466 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:42:50.467 Found net devices under 0000:4b:00.0: cvl_0_0
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:42:50.467 Found net devices under 0000:4b:00.1: cvl_0_1
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
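The device discovery above is plain sysfs walking: for each candidate PCI function, the harness globs "/sys/bus/pci/devices/$pci/net/"* to find the bound netdev. A minimal sketch of the same lookup, using a PCI address from this run:

    pci=0000:4b:00.0
    # each entry under .../net is a kernel netdev bound to that PCI function (cvl_0_0 here)
    ls "/sys/bus/pci/devices/$pci/net/"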
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:42:50.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:42:50.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms
00:42:50.467 
00:42:50.467 --- 10.0.0.2 ping statistics ---
00:42:50.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:50.467 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:42:50.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:42:50.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:42:50.467 
00:42:50.467 --- 10.0.0.1 ping statistics ---
00:42:50.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:42:50.467 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3140971
00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3140971 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3140971 ']' 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:50.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:50.467 10:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.467 [2024-12-09 10:00:24.926956] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:42:50.467 [2024-12-09 10:00:24.927963] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:42:50.467 [2024-12-09 10:00:24.928004] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:50.467 [2024-12-09 10:00:24.995542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:50.467 [2024-12-09 10:00:25.018125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:50.467 [2024-12-09 10:00:25.018166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:50.467 [2024-12-09 10:00:25.018173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:50.467 [2024-12-09 10:00:25.018177] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:50.467 [2024-12-09 10:00:25.018182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:42:50.467 [2024-12-09 10:00:25.022662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:50.467 [2024-12-09 10:00:25.022809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:50.467 [2024-12-09 10:00:25.023063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:50.467 [2024-12-09 10:00:25.023065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:50.467 [2024-12-09 10:00:25.080453] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:42:50.467 [2024-12-09 10:00:25.080529] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:42:50.467 [2024-12-09 10:00:25.080973] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:42:50.467 [2024-12-09 10:00:25.081308] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:42:50.467 [2024-12-09 10:00:25.081383] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:42:50.467 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:50.467 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:42:50.467 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:50.467 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.468 [2024-12-09 10:00:25.152055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.468 Malloc0 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
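[Editor's note] Everything from nvmf/common.sh@508 through nmic.sh@23 above reduces to: launch nvmf_tgt inside the target namespace, wait for its RPC socket, then provision it. A consolidated sketch, with rpc_cmd expanded to the scripts/rpc.py it wraps (default socket /var/tmp/spdk.sock); flag readings are per the SPDK app framework: -m 0xF is the four-core reactor mask, -e 0xFFFF the tracepoint group mask, -i 0 the shared-memory instance id, and --interrupt-mode is what this test variant exercises:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!                                                   # 3140971 in this run; killed on cleanup
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8 KiB IO unit size
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB ramdisk bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case1 below then hands the same Malloc0 to a second subsystem (cnode2); that is expected to fail, because the first subsystem already holds an exclusive_write claim on the bdev — the JSON-RPC error that follows is the pass condition, not a defect.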
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.468 [2024-12-09 10:00:25.244341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:42:50.468 test case1: single bdev can't be used in multiple subsystems
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.468 [2024-12-09 10:00:25.279619] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:42:50.468 [2024-12-09 10:00:25.279650] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:42:50.468 [2024-12-09 10:00:25.279660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:42:50.468 request:
00:42:50.468 {
00:42:50.468 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:42:50.468 "namespace": {
00:42:50.468 "bdev_name": "Malloc0",
00:42:50.468 "no_auto_visible": false,
00:42:50.468 "hide_metadata": false
00:42:50.468 },
00:42:50.468 "method": "nvmf_subsystem_add_ns",
00:42:50.468 "req_id": 1
00:42:50.468 }
00:42:50.468 Got JSON-RPC error response
00:42:50.468 response:
00:42:50.468 {
00:42:50.468 "code": -32602,
00:42:50.468 "message": "Invalid parameters"
00:42:50.468 }
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:42:50.468 Adding namespace failed - expected result.
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:42:50.468 test case2: host connect to nvmf target in multiple paths
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:42:50.468 [2024-12-09 10:00:25.291749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:42:50.468 10:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:42:50.727 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:42:50.727 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:42:50.727 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:42:50.727 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:42:50.727 10:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:42:53.271 10:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:42:53.271 [global]
00:42:53.271 thread=1
00:42:53.271 invalidate=1
00:42:53.271 rw=write
00:42:53.271 time_based=1
00:42:53.271 runtime=1
00:42:53.271 ioengine=libaio
00:42:53.271 direct=1
00:42:53.271 bs=4096
00:42:53.271 iodepth=1
00:42:53.271 norandommap=0
00:42:53.271 numjobs=1
00:42:53.271 
00:42:53.271 verify_dump=1
00:42:53.271 verify_backlog=512
00:42:53.271 verify_state_save=0
00:42:53.271 do_verify=1
00:42:53.271 verify=crc32c-intel
00:42:53.271 [job0]
00:42:53.271 filename=/dev/nvme0n1
00:42:53.271 Could not set queue depth (nvme0n1)
00:42:53.271 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:42:53.271 fio-3.35
00:42:53.271 Starting 1 thread
00:42:54.214 
00:42:54.214 job0: (groupid=0, jobs=1): err= 0: pid=3141841: Mon Dec 9 10:00:29 2024
00:42:54.214 read: IOPS=17, BW=71.0KiB/s (72.7kB/s)(72.0KiB/1014msec)
00:42:54.214 slat (nsec): min=25690, max=26997, avg=26321.78, stdev=333.39
00:42:54.214 clat (usec): min=1055, max=42074, avg=39614.37, stdev=9626.43
00:42:54.214 lat (usec): min=1081, max=42101, avg=39640.69, stdev=9626.42
00:42:54.214 clat percentiles (usec):
00:42:54.214 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 20.00th=[41681],
00:42:54.214 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206],
00:42:54.214 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:42:54.214 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:42:54.214 | 99.99th=[42206]
00:42:54.214 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets
00:42:54.214 slat (usec): min=9, max=29671, avg=88.18, stdev=1310.02
00:42:54.214 clat (usec): min=252, max=772, avg=491.13, stdev=91.42
00:42:54.214 lat (usec): min=262, max=30287, avg=579.31, stdev=1318.84
00:42:54.214 clat percentiles (usec):
00:42:54.214 | 1.00th=[ 314], 5.00th=[ 343], 10.00th=[ 367], 20.00th=[ 433],
00:42:54.214 | 30.00th=[ 441], 40.00th=[ 449], 50.00th=[ 469], 60.00th=[ 519],
00:42:54.214 | 70.00th=[ 537], 80.00th=[ 562], 90.00th=[ 619], 95.00th=[ 652],
00:42:54.214 | 99.00th=[ 717], 99.50th=[ 742], 99.90th=[ 775], 99.95th=[ 775],
00:42:54.214 | 99.99th=[ 775]
00:42:54.214 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:42:54.214 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:42:54.214 lat (usec) : 500=55.47%, 750=40.75%, 1000=0.38%
00:42:54.214 lat (msec) : 2=0.19%, 50=3.21%
00:42:54.214 cpu : usr=0.59%, sys=1.68%, ctx=534, majf=0, minf=1
00:42:54.214 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:42:54.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:54.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:42:54.214 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:42:54.214 latency : target=0, window=0, percentile=100.00%, depth=1
00:42:54.214 
00:42:54.215 
00:42:54.215 Run status group 0 (all jobs):
00:42:54.215 READ: bw=71.0KiB/s (72.7kB/s), 71.0KiB/s-71.0KiB/s (72.7kB/s-72.7kB/s), io=72.0KiB (73.7kB), run=1014-1014msec
00:42:54.215 WRITE: bw=2020KiB/s (2068kB/s), 2020KiB/s-2020KiB/s (2068kB/s-2068kB/s), io=2048KiB (2097kB), run=1014-1014msec
00:42:54.215 
00:42:54.215 Disk stats (read/write):
00:42:54.215 nvme0n1: ios=41/512, merge=0/0, ticks=1580/245, in_queue=1825, util=98.70%
00:42:54.215 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:42:54.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:42:54.476 10:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:42:54.476 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:54.477 rmmod nvme_tcp 00:42:54.477 rmmod nvme_fabrics 00:42:54.477 rmmod nvme_keyring 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3140971 ']' 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3140971 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3140971 ']' 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3140971 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:42:54.477 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140971 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3140971' 00:42:54.739 killing process with pid 3140971 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3140971 00:42:54.739 10:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3140971 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:54.739 10:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:57.282 00:42:57.282 real 0m14.827s 00:42:57.282 user 0m31.595s 00:42:57.282 sys 0m7.352s 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:42:57.282 ************************************ 00:42:57.282 END TEST nvmf_nmic 00:42:57.282 ************************************ 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:42:57.282 ************************************ 00:42:57.282 START TEST nvmf_fio_target 00:42:57.282 ************************************ 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:42:57.282 * Looking for test storage... 
00:42:57.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:42:57.282 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:57.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.283 --rc genhtml_branch_coverage=1 00:42:57.283 --rc genhtml_function_coverage=1 00:42:57.283 --rc genhtml_legend=1 00:42:57.283 --rc geninfo_all_blocks=1 00:42:57.283 --rc geninfo_unexecuted_blocks=1 00:42:57.283 00:42:57.283 ' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:57.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.283 --rc genhtml_branch_coverage=1 00:42:57.283 --rc genhtml_function_coverage=1 00:42:57.283 --rc genhtml_legend=1 00:42:57.283 --rc geninfo_all_blocks=1 00:42:57.283 --rc geninfo_unexecuted_blocks=1 00:42:57.283 00:42:57.283 ' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:57.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.283 --rc genhtml_branch_coverage=1 00:42:57.283 --rc genhtml_function_coverage=1 00:42:57.283 --rc genhtml_legend=1 00:42:57.283 --rc geninfo_all_blocks=1 00:42:57.283 --rc geninfo_unexecuted_blocks=1 00:42:57.283 00:42:57.283 ' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:57.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:57.283 --rc genhtml_branch_coverage=1 00:42:57.283 --rc genhtml_function_coverage=1 00:42:57.283 --rc genhtml_legend=1 00:42:57.283 --rc geninfo_all_blocks=1 00:42:57.283 --rc geninfo_unexecuted_blocks=1 00:42:57.283 
00:42:57.283 ' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:42:57.283 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:42:57.284 10:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:05.429 10:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:05.429 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:05.430 10:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:05.430 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:05.430 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:05.430 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:05.430 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:05.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:05.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:43:05.430 00:43:05.430 --- 10.0.0.2 ping statistics --- 00:43:05.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.430 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:05.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:05.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:43:05.430 00:43:05.430 --- 10.0.0.1 ping statistics --- 00:43:05.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:05.430 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:05.430 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3146183 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3146183 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3146183 ']' 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:05.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
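The xtrace above shows nvmf_tcp_init wiring up the test network: the two ports of the NIC at 0000:4b:00.x (cvl_0_0 and cvl_0_1) are split so the target port lives in its own network namespace, letting initiator and target talk over real hardware on a single host. A condensed sketch of the commands traced above (interface, namespace, and address values exactly as discovered by the script; error handling omitted):

    TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF"; ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"    # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1           # verify both directions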
00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:05.431 10:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.431 [2024-12-09 10:00:39.796882] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:05.431 [2024-12-09 10:00:39.797971] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:43:05.431 [2024-12-09 10:00:39.798022] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.431 [2024-12-09 10:00:39.898788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:05.431 [2024-12-09 10:00:39.927229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:05.431 [2024-12-09 10:00:39.927282] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:05.431 [2024-12-09 10:00:39.927292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:05.431 [2024-12-09 10:00:39.927299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:05.431 [2024-12-09 10:00:39.927306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:05.431 [2024-12-09 10:00:39.929598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.431 [2024-12-09 10:00:39.929747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:05.431 [2024-12-09 10:00:39.930040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:05.431 [2024-12-09 10:00:39.930036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:05.431 [2024-12-09 10:00:39.994976] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:05.431 [2024-12-09 10:00:39.994988] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:05.431 [2024-12-09 10:00:39.996058] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:05.431 [2024-12-09 10:00:39.996568] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:05.431 [2024-12-09 10:00:39.996661] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
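With the namespace in place, nvmfappstart launches the SPDK target inside it with --interrupt-mode (hence the reactors on cores 0-3 reporting interrupt mode above) and blocks until the RPC socket answers. A minimal sketch of that launch-and-wait step, using the binary path and flags from the trace; the polling loop here is an illustrative stand-in, not waitforlisten's actual implementation:

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready; the unix socket lives on the
    # shared filesystem, so rpc.py can reach it from the root namespace.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done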
00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:05.431 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:43:05.431 [2024-12-09 10:00:40.839123] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:05.693 10:00:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:05.693 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:43:05.693 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:05.953 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:43:05.953 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:06.214 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:43:06.214 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:06.475 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:43:06.475 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:43:06.475 10:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:06.737 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:43:06.737 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:06.999 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:43:06.999 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:43:06.999 10:00:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:43:06.999 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:43:07.259 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:07.520 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:07.520 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:07.520 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:43:07.520 10:00:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:43:07.781 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:08.041 [2024-12-09 10:00:43.290852] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:08.041 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:43:08.302 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:43:08.302 10:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:43:08.874 10:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:43:10.792 10:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:43:10.792 [global] 00:43:10.792 thread=1 00:43:10.792 invalidate=1 00:43:10.792 rw=write 00:43:10.792 time_based=1 00:43:10.792 runtime=1 00:43:10.792 ioengine=libaio 00:43:10.792 direct=1 00:43:10.792 bs=4096 00:43:10.792 iodepth=1 00:43:10.792 norandommap=0 00:43:10.792 numjobs=1 00:43:10.792 00:43:10.792 verify_dump=1 00:43:10.792 verify_backlog=512 00:43:10.792 verify_state_save=0 00:43:10.792 do_verify=1 00:43:10.792 verify=crc32c-intel 00:43:10.792 [job0] 00:43:10.792 filename=/dev/nvme0n1 00:43:10.792 [job1] 00:43:10.792 filename=/dev/nvme0n2 00:43:10.792 [job2] 00:43:10.792 filename=/dev/nvme0n3 00:43:10.792 [job3] 00:43:10.792 filename=/dev/nvme0n4 00:43:10.792 Could not set queue depth (nvme0n1) 00:43:10.792 Could not set queue depth (nvme0n2) 00:43:10.792 Could not set queue depth (nvme0n3) 00:43:10.792 Could not set queue depth (nvme0n4) 00:43:11.052 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.052 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.052 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.052 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:11.052 fio-3.35 00:43:11.052 Starting 4 threads 00:43:12.435 00:43:12.435 job0: (groupid=0, jobs=1): err= 0: pid=3147759: Mon Dec 9 10:00:47 2024 00:43:12.435 read: IOPS=33, BW=135KiB/s (138kB/s)(136KiB/1011msec) 00:43:12.435 slat (nsec): min=25909, max=47236, avg=26975.62, stdev=3631.78 00:43:12.435 clat (usec): min=660, max=42702, avg=21842.13, stdev=20525.21 00:43:12.435 lat (usec): min=686, max=42728, avg=21869.10, stdev=20524.45 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 660], 5.00th=[ 947], 10.00th=[ 979], 20.00th=[ 1057], 00:43:12.435 | 30.00th=[ 1074], 40.00th=[ 1156], 50.00th=[12387], 60.00th=[41681], 00:43:12.435 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:12.435 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:43:12.435 | 99.99th=[42730] 00:43:12.435 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:43:12.435 slat (nsec): min=9678, max=87384, avg=30021.06, stdev=10854.38 00:43:12.435 clat (usec): min=131, max=959, avg=485.38, stdev=150.03 00:43:12.435 lat (usec): min=142, max=993, avg=515.40, stdev=155.28 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 149], 5.00th=[ 273], 10.00th=[ 293], 20.00th=[ 326], 00:43:12.435 | 30.00th=[ 396], 40.00th=[ 433], 50.00th=[ 494], 60.00th=[ 529], 00:43:12.435 | 70.00th=[ 570], 80.00th=[ 611], 90.00th=[ 693], 95.00th=[ 742], 00:43:12.435 | 
99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 963], 99.95th=[ 963], 00:43:12.435 | 99.99th=[ 963] 00:43:12.435 bw ( KiB/s): min= 4096, max= 4096, per=43.10%, avg=4096.00, stdev= 0.00, samples=1 00:43:12.435 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:12.435 lat (usec) : 250=2.56%, 500=45.79%, 750=41.76%, 1000=4.40% 00:43:12.435 lat (msec) : 2=2.20%, 20=0.18%, 50=3.11% 00:43:12.435 cpu : usr=0.89%, sys=1.39%, ctx=549, majf=0, minf=1 00:43:12.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.435 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:12.435 job1: (groupid=0, jobs=1): err= 0: pid=3147760: Mon Dec 9 10:00:47 2024 00:43:12.435 read: IOPS=24, BW=99.2KiB/s (102kB/s)(100KiB/1008msec) 00:43:12.435 slat (nsec): min=10765, max=27005, avg=25104.64, stdev=4180.02 00:43:12.435 clat (usec): min=521, max=42136, avg=27092.70, stdev=20057.04 00:43:12.435 lat (usec): min=548, max=42162, avg=27117.81, stdev=20058.79 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 523], 5.00th=[ 734], 10.00th=[ 791], 20.00th=[ 938], 00:43:12.435 | 30.00th=[ 1020], 40.00th=[40633], 50.00th=[41681], 60.00th=[41681], 00:43:12.435 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:12.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:12.435 | 99.99th=[42206] 00:43:12.435 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:43:12.435 slat (nsec): min=10218, max=62020, avg=30815.63, stdev=9747.46 00:43:12.435 clat (usec): min=233, max=956, avg=606.55, stdev=115.19 00:43:12.435 lat (usec): min=244, max=991, avg=637.36, stdev=119.32 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 359], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 502], 00:43:12.435 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 635], 00:43:12.435 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 783], 00:43:12.435 | 99.00th=[ 832], 99.50th=[ 873], 99.90th=[ 955], 99.95th=[ 955], 00:43:12.435 | 99.99th=[ 955] 00:43:12.435 bw ( KiB/s): min= 4096, max= 4096, per=43.10%, avg=4096.00, stdev= 0.00, samples=1 00:43:12.435 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:12.435 lat (usec) : 250=0.19%, 500=18.06%, 750=67.97%, 1000=10.43% 00:43:12.435 lat (msec) : 2=0.37%, 50=2.98% 00:43:12.435 cpu : usr=0.40%, sys=1.89%, ctx=538, majf=0, minf=1 00:43:12.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.435 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:12.435 job2: (groupid=0, jobs=1): err= 0: pid=3147762: Mon Dec 9 10:00:47 2024 00:43:12.435 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:12.435 slat (nsec): min=7110, max=59875, avg=26182.71, stdev=6695.45 00:43:12.435 clat (usec): min=364, max=41553, avg=978.74, stdev=2533.48 00:43:12.435 lat (usec): min=372, max=41582, avg=1004.92, stdev=2533.76 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 461], 5.00th=[ 594], 10.00th=[ 652], 
20.00th=[ 693], 00:43:12.435 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[ 816], 60.00th=[ 840], 00:43:12.435 | 70.00th=[ 881], 80.00th=[ 947], 90.00th=[ 1037], 95.00th=[ 1074], 00:43:12.435 | 99.00th=[ 1205], 99.50th=[ 1483], 99.90th=[41681], 99.95th=[41681], 00:43:12.435 | 99.99th=[41681] 00:43:12.435 write: IOPS=865, BW=3461KiB/s (3544kB/s)(3464KiB/1001msec); 0 zone resets 00:43:12.435 slat (nsec): min=9928, max=70973, avg=33173.53, stdev=10577.58 00:43:12.435 clat (usec): min=129, max=909, avg=515.38, stdev=135.48 00:43:12.435 lat (usec): min=140, max=946, avg=548.55, stdev=139.54 00:43:12.435 clat percentiles (usec): 00:43:12.435 | 1.00th=[ 174], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[ 396], 00:43:12.435 | 30.00th=[ 445], 40.00th=[ 490], 50.00th=[ 523], 60.00th=[ 553], 00:43:12.435 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 725], 00:43:12.436 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 906], 99.95th=[ 906], 00:43:12.436 | 99.99th=[ 906] 00:43:12.436 bw ( KiB/s): min= 4096, max= 4096, per=43.10%, avg=4096.00, stdev= 0.00, samples=1 00:43:12.436 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:12.436 lat (usec) : 250=1.81%, 500=25.91%, 750=45.21%, 1000=22.28% 00:43:12.436 lat (msec) : 2=4.64%, 50=0.15% 00:43:12.436 cpu : usr=2.90%, sys=4.80%, ctx=1379, majf=0, minf=1 00:43:12.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.436 issued rwts: total=512,866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:12.436 job3: (groupid=0, jobs=1): err= 0: pid=3147763: Mon Dec 9 10:00:47 2024 00:43:12.436 read: IOPS=15, BW=63.8KiB/s (65.3kB/s)(64.0KiB/1003msec) 00:43:12.436 slat (nsec): min=26632, max=29087, avg=27130.00, stdev=560.95 00:43:12.436 clat (usec): min=41190, max=42059, avg=41872.25, stdev=222.94 00:43:12.436 lat (usec): min=41218, max=42086, avg=41899.38, stdev=222.75 00:43:12.436 clat percentiles (usec): 00:43:12.436 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:43:12.436 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:12.436 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:12.436 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:12.436 | 99.99th=[42206] 00:43:12.436 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:43:12.436 slat (usec): min=10, max=9535, avg=51.67, stdev=420.01 00:43:12.436 clat (usec): min=229, max=974, avg=590.82, stdev=130.05 00:43:12.436 lat (usec): min=241, max=10165, avg=642.48, stdev=441.78 00:43:12.436 clat percentiles (usec): 00:43:12.436 | 1.00th=[ 258], 5.00th=[ 363], 10.00th=[ 429], 20.00th=[ 478], 00:43:12.436 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:43:12.436 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 758], 95.00th=[ 791], 00:43:12.436 | 99.00th=[ 857], 99.50th=[ 865], 99.90th=[ 979], 99.95th=[ 979], 00:43:12.436 | 99.99th=[ 979] 00:43:12.436 bw ( KiB/s): min= 4096, max= 4096, per=43.10%, avg=4096.00, stdev= 0.00, samples=1 00:43:12.436 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:12.436 lat (usec) : 250=0.76%, 500=23.86%, 750=61.93%, 1000=10.42% 00:43:12.436 lat (msec) : 50=3.03% 00:43:12.436 cpu : usr=0.80%, sys=1.60%, ctx=530, majf=0, minf=2 00:43:12.436 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.436 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.436 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:12.436 00:43:12.436 Run status group 0 (all jobs): 00:43:12.436 READ: bw=2322KiB/s (2378kB/s), 63.8KiB/s-2046KiB/s (65.3kB/s-2095kB/s), io=2348KiB (2404kB), run=1001-1011msec 00:43:12.436 WRITE: bw=9503KiB/s (9732kB/s), 2026KiB/s-3461KiB/s (2074kB/s-3544kB/s), io=9608KiB (9839kB), run=1001-1011msec 00:43:12.436 00:43:12.436 Disk stats (read/write): 00:43:12.436 nvme0n1: ios=51/512, merge=0/0, ticks=1360/227, in_queue=1587, util=84.07% 00:43:12.436 nvme0n2: ios=62/512, merge=0/0, ticks=1010/293, in_queue=1303, util=87.95% 00:43:12.436 nvme0n3: ios=569/571, merge=0/0, ticks=1168/217, in_queue=1385, util=92.18% 00:43:12.436 nvme0n4: ios=71/512, merge=0/0, ticks=1046/295, in_queue=1341, util=94.22% 00:43:12.436 10:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:43:12.436 [global] 00:43:12.436 thread=1 00:43:12.436 invalidate=1 00:43:12.436 rw=randwrite 00:43:12.436 time_based=1 00:43:12.436 runtime=1 00:43:12.436 ioengine=libaio 00:43:12.436 direct=1 00:43:12.436 bs=4096 00:43:12.436 iodepth=1 00:43:12.436 norandommap=0 00:43:12.436 numjobs=1 00:43:12.436 00:43:12.436 verify_dump=1 00:43:12.436 verify_backlog=512 00:43:12.436 verify_state_save=0 00:43:12.436 do_verify=1 00:43:12.436 verify=crc32c-intel 00:43:12.436 [job0] 00:43:12.436 filename=/dev/nvme0n1 00:43:12.436 [job1] 00:43:12.436 filename=/dev/nvme0n2 00:43:12.436 [job2] 00:43:12.436 filename=/dev/nvme0n3 00:43:12.436 [job3] 00:43:12.436 filename=/dev/nvme0n4 00:43:12.436 Could not set queue depth (nvme0n1) 00:43:12.436 Could not set queue depth (nvme0n2) 00:43:12.436 Could not set queue depth (nvme0n3) 00:43:12.436 Could not set queue depth (nvme0n4) 00:43:13.008 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:13.008 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:13.008 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:13.008 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:13.008 fio-3.35 00:43:13.008 Starting 4 threads 00:43:14.394 00:43:14.394 job0: (groupid=0, jobs=1): err= 0: pid=3148198: Mon Dec 9 10:00:49 2024 00:43:14.394 read: IOPS=17, BW=70.4KiB/s (72.1kB/s)(72.0KiB/1023msec) 00:43:14.394 slat (nsec): min=24987, max=25575, avg=25177.94, stdev=159.56 00:43:14.394 clat (usec): min=1112, max=42111, avg=37416.20, stdev=13190.03 00:43:14.394 lat (usec): min=1138, max=42136, avg=37441.38, stdev=13190.03 00:43:14.394 clat percentiles (usec): 00:43:14.394 | 1.00th=[ 1106], 5.00th=[ 1106], 10.00th=[ 1205], 20.00th=[41681], 00:43:14.394 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:43:14.394 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:14.394 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:14.394 | 99.99th=[42206] 00:43:14.394 write: IOPS=500, 
BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:43:14.394 slat (nsec): min=9400, max=48899, avg=27486.55, stdev=8614.36 00:43:14.394 clat (usec): min=269, max=995, avg=645.83, stdev=121.26 00:43:14.394 lat (usec): min=279, max=1026, avg=673.31, stdev=125.30 00:43:14.394 clat percentiles (usec): 00:43:14.394 | 1.00th=[ 375], 5.00th=[ 433], 10.00th=[ 482], 20.00th=[ 545], 00:43:14.394 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 693], 00:43:14.394 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 832], 00:43:14.394 | 99.00th=[ 914], 99.50th=[ 922], 99.90th=[ 996], 99.95th=[ 996], 00:43:14.394 | 99.99th=[ 996] 00:43:14.394 bw ( KiB/s): min= 4096, max= 4096, per=48.01%, avg=4096.00, stdev= 0.00, samples=1 00:43:14.394 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:14.394 lat (usec) : 500=13.21%, 750=64.72%, 1000=18.68% 00:43:14.394 lat (msec) : 2=0.38%, 50=3.02% 00:43:14.394 cpu : usr=1.08%, sys=1.08%, ctx=530, majf=0, minf=1 00:43:14.394 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.394 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.394 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.394 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:14.394 job1: (groupid=0, jobs=1): err= 0: pid=3148214: Mon Dec 9 10:00:49 2024 00:43:14.394 read: IOPS=16, BW=66.9KiB/s (68.5kB/s)(68.0KiB/1017msec) 00:43:14.394 slat (nsec): min=27192, max=27577, avg=27403.47, stdev=129.90 00:43:14.394 clat (usec): min=40800, max=41849, avg=41090.88, stdev=296.57 00:43:14.394 lat (usec): min=40827, max=41876, avg=41118.28, stdev=296.62 00:43:14.394 clat percentiles (usec): 00:43:14.394 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:43:14.394 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:14.394 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:43:14.394 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:43:14.394 | 99.99th=[41681] 00:43:14.394 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:43:14.394 slat (nsec): min=8356, max=67638, avg=30836.34, stdev=9895.53 00:43:14.394 clat (usec): min=271, max=908, avg=581.39, stdev=117.84 00:43:14.394 lat (usec): min=290, max=942, avg=612.22, stdev=120.45 00:43:14.394 clat percentiles (usec): 00:43:14.394 | 1.00th=[ 343], 5.00th=[ 383], 10.00th=[ 412], 20.00th=[ 482], 00:43:14.394 | 30.00th=[ 515], 40.00th=[ 553], 50.00th=[ 586], 60.00th=[ 619], 00:43:14.394 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 775], 00:43:14.394 | 99.00th=[ 848], 99.50th=[ 865], 99.90th=[ 906], 99.95th=[ 906], 00:43:14.394 | 99.99th=[ 906] 00:43:14.394 bw ( KiB/s): min= 4096, max= 4096, per=48.01%, avg=4096.00, stdev= 0.00, samples=1 00:43:14.394 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:14.394 lat (usec) : 500=23.82%, 750=65.78%, 1000=7.18% 00:43:14.394 lat (msec) : 50=3.21% 00:43:14.394 cpu : usr=0.79%, sys=2.17%, ctx=531, majf=0, minf=1 00:43:14.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.395 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:43:14.395 job2: (groupid=0, jobs=1): err= 0: pid=3148234: Mon Dec 9 10:00:49 2024 00:43:14.395 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:43:14.395 slat (nsec): min=6858, max=60104, avg=25940.72, stdev=4699.90 00:43:14.395 clat (usec): min=700, max=1269, avg=1056.67, stdev=82.43 00:43:14.395 lat (usec): min=726, max=1295, avg=1082.61, stdev=83.33 00:43:14.395 clat percentiles (usec): 00:43:14.395 | 1.00th=[ 807], 5.00th=[ 906], 10.00th=[ 955], 20.00th=[ 996], 00:43:14.395 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:43:14.395 | 70.00th=[ 1106], 80.00th=[ 1123], 90.00th=[ 1139], 95.00th=[ 1172], 00:43:14.395 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1270], 00:43:14.395 | 99.99th=[ 1270] 00:43:14.395 write: IOPS=662, BW=2649KiB/s (2713kB/s)(2652KiB/1001msec); 0 zone resets 00:43:14.395 slat (nsec): min=9163, max=82452, avg=29721.21, stdev=8701.72 00:43:14.395 clat (usec): min=269, max=948, avg=628.29, stdev=110.24 00:43:14.395 lat (usec): min=285, max=983, avg=658.02, stdev=113.50 00:43:14.395 clat percentiles (usec): 00:43:14.395 | 1.00th=[ 338], 5.00th=[ 416], 10.00th=[ 486], 20.00th=[ 545], 00:43:14.395 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 660], 00:43:14.395 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 799], 00:43:14.395 | 99.00th=[ 873], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 947], 00:43:14.395 | 99.99th=[ 947] 00:43:14.395 bw ( KiB/s): min= 4096, max= 4096, per=48.01%, avg=4096.00, stdev= 0.00, samples=1 00:43:14.395 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:14.395 lat (usec) : 500=7.15%, 750=42.98%, 1000=15.40% 00:43:14.395 lat (msec) : 2=34.47% 00:43:14.395 cpu : usr=2.30%, sys=3.00%, ctx=1175, majf=0, minf=1 00:43:14.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 issued rwts: total=512,663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:14.395 job3: (groupid=0, jobs=1): err= 0: pid=3148240: Mon Dec 9 10:00:49 2024 00:43:14.395 read: IOPS=16, BW=66.0KiB/s (67.5kB/s)(68.0KiB/1031msec) 00:43:14.395 slat (nsec): min=26493, max=44401, avg=28085.24, stdev=4438.47 00:43:14.395 clat (usec): min=1127, max=42025, avg=39294.00, stdev=9841.70 00:43:14.395 lat (usec): min=1154, max=42051, avg=39322.09, stdev=9841.95 00:43:14.395 clat percentiles (usec): 00:43:14.395 | 1.00th=[ 1123], 5.00th=[ 1123], 10.00th=[41157], 20.00th=[41157], 00:43:14.395 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:43:14.395 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:43:14.395 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:14.395 | 99.99th=[42206] 00:43:14.395 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:43:14.395 slat (nsec): min=9085, max=51701, avg=29900.85, stdev=8579.36 00:43:14.395 clat (usec): min=296, max=1025, avg=669.29, stdev=126.48 00:43:14.395 lat (usec): min=308, max=1057, avg=699.19, stdev=129.45 00:43:14.395 clat percentiles (usec): 00:43:14.395 | 1.00th=[ 375], 5.00th=[ 465], 10.00th=[ 510], 20.00th=[ 570], 00:43:14.395 | 30.00th=[ 603], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 701], 00:43:14.395 | 70.00th=[ 725], 80.00th=[ 766], 
90.00th=[ 832], 95.00th=[ 889], 00:43:14.395 | 99.00th=[ 971], 99.50th=[ 988], 99.90th=[ 1029], 99.95th=[ 1029], 00:43:14.395 | 99.99th=[ 1029] 00:43:14.395 bw ( KiB/s): min= 4096, max= 4096, per=48.01%, avg=4096.00, stdev= 0.00, samples=1 00:43:14.395 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:43:14.395 lat (usec) : 500=8.88%, 750=64.65%, 1000=23.06% 00:43:14.395 lat (msec) : 2=0.38%, 50=3.02% 00:43:14.395 cpu : usr=1.17%, sys=1.84%, ctx=529, majf=0, minf=1 00:43:14.395 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:14.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.395 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.395 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:14.395 00:43:14.395 Run status group 0 (all jobs): 00:43:14.395 READ: bw=2188KiB/s (2241kB/s), 66.0KiB/s-2046KiB/s (67.5kB/s-2095kB/s), io=2256KiB (2310kB), run=1001-1031msec 00:43:14.395 WRITE: bw=8532KiB/s (8736kB/s), 1986KiB/s-2649KiB/s (2034kB/s-2713kB/s), io=8796KiB (9007kB), run=1001-1031msec 00:43:14.395 00:43:14.395 Disk stats (read/write): 00:43:14.395 nvme0n1: ios=63/512, merge=0/0, ticks=489/326, in_queue=815, util=85.77% 00:43:14.395 nvme0n2: ios=37/512, merge=0/0, ticks=1449/238, in_queue=1687, util=97.04% 00:43:14.395 nvme0n3: ios=499/512, merge=0/0, ticks=617/311, in_queue=928, util=99.89% 00:43:14.395 nvme0n4: ios=12/512, merge=0/0, ticks=458/286, in_queue=744, util=89.51% 00:43:14.395 10:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:43:14.395 [global] 00:43:14.395 thread=1 00:43:14.395 invalidate=1 00:43:14.395 rw=write 00:43:14.395 time_based=1 00:43:14.395 runtime=1 00:43:14.395 ioengine=libaio 00:43:14.395 direct=1 00:43:14.395 bs=4096 00:43:14.395 iodepth=128 00:43:14.395 norandommap=0 00:43:14.395 numjobs=1 00:43:14.395 00:43:14.395 verify_dump=1 00:43:14.395 verify_backlog=512 00:43:14.395 verify_state_save=0 00:43:14.395 do_verify=1 00:43:14.395 verify=crc32c-intel 00:43:14.395 [job0] 00:43:14.395 filename=/dev/nvme0n1 00:43:14.395 [job1] 00:43:14.395 filename=/dev/nvme0n2 00:43:14.395 [job2] 00:43:14.395 filename=/dev/nvme0n3 00:43:14.395 [job3] 00:43:14.395 filename=/dev/nvme0n4 00:43:14.395 Could not set queue depth (nvme0n1) 00:43:14.395 Could not set queue depth (nvme0n2) 00:43:14.395 Could not set queue depth (nvme0n3) 00:43:14.395 Could not set queue depth (nvme0n4) 00:43:14.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:14.657 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:14.657 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:14.657 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:14.657 fio-3.35 00:43:14.657 Starting 4 threads 00:43:16.044 00:43:16.044 job0: (groupid=0, jobs=1): err= 0: pid=3148657: Mon Dec 9 10:00:51 2024 00:43:16.044 read: IOPS=6202, BW=24.2MiB/s (25.4MB/s)(24.4MiB/1007msec) 00:43:16.044 slat (nsec): min=946, max=12002k, avg=77405.47, stdev=622761.62 00:43:16.044 clat (usec): min=2658, max=29740, avg=9685.32, stdev=3452.73 00:43:16.044 lat (usec): 
min=2663, max=29743, avg=9762.72, stdev=3509.19 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 4490], 5.00th=[ 6063], 10.00th=[ 6587], 20.00th=[ 6980], 00:43:16.044 | 30.00th=[ 7701], 40.00th=[ 8356], 50.00th=[ 8848], 60.00th=[10028], 00:43:16.044 | 70.00th=[10683], 80.00th=[11731], 90.00th=[13435], 95.00th=[16188], 00:43:16.044 | 99.00th=[24511], 99.50th=[26084], 99.90th=[29492], 99.95th=[29754], 00:43:16.044 | 99.99th=[29754] 00:43:16.044 write: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec); 0 zone resets 00:43:16.044 slat (nsec): min=1687, max=7904.4k, avg=71777.65, stdev=431746.03 00:43:16.044 clat (usec): min=2090, max=29735, avg=10000.20, stdev=5404.35 00:43:16.044 lat (usec): min=2100, max=29744, avg=10071.98, stdev=5439.19 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 3326], 5.00th=[ 5145], 10.00th=[ 5604], 20.00th=[ 6063], 00:43:16.044 | 30.00th=[ 6587], 40.00th=[ 6849], 50.00th=[ 7832], 60.00th=[ 8979], 00:43:16.044 | 70.00th=[10290], 80.00th=[14091], 90.00th=[18744], 95.00th=[22152], 00:43:16.044 | 99.00th=[25560], 99.50th=[26346], 99.90th=[27919], 99.95th=[29492], 00:43:16.044 | 99.99th=[29754] 00:43:16.044 bw ( KiB/s): min=24560, max=28488, per=26.64%, avg=26524.00, stdev=2777.52, samples=2 00:43:16.044 iops : min= 6140, max= 7122, avg=6631.00, stdev=694.38, samples=2 00:43:16.044 lat (msec) : 4=1.40%, 10=62.46%, 20=30.58%, 50=5.57% 00:43:16.044 cpu : usr=4.87%, sys=6.36%, ctx=536, majf=0, minf=1 00:43:16.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:43:16.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:16.044 issued rwts: total=6246,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:16.044 job1: (groupid=0, jobs=1): err= 0: pid=3148658: Mon Dec 9 10:00:51 2024 00:43:16.044 read: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.1MiB/1008msec) 00:43:16.044 slat (nsec): min=957, max=9049.6k, avg=58939.13, stdev=457757.73 00:43:16.044 clat (usec): min=2427, max=32912, avg=7950.13, stdev=3071.95 00:43:16.044 lat (usec): min=2434, max=32918, avg=8009.07, stdev=3098.03 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 3392], 5.00th=[ 4883], 10.00th=[ 5604], 20.00th=[ 6128], 00:43:16.044 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7832], 00:43:16.044 | 70.00th=[ 8455], 80.00th=[ 9503], 90.00th=[10683], 95.00th=[12387], 00:43:16.044 | 99.00th=[22676], 99.50th=[27657], 99.90th=[32375], 99.95th=[32900], 00:43:16.044 | 99.99th=[32900] 00:43:16.044 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1008msec); 0 zone resets 00:43:16.044 slat (nsec): min=1638, max=38432k, avg=51773.27, stdev=594296.01 00:43:16.044 clat (usec): min=760, max=44045, avg=7795.60, stdev=6176.53 00:43:16.044 lat (usec): min=770, max=44054, avg=7847.38, stdev=6205.94 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 1270], 5.00th=[ 3195], 10.00th=[ 3916], 20.00th=[ 4686], 00:43:16.044 | 30.00th=[ 5407], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 6652], 00:43:16.044 | 70.00th=[ 7046], 80.00th=[ 8586], 90.00th=[12125], 95.00th=[19792], 00:43:16.044 | 99.00th=[40109], 99.50th=[40109], 99.90th=[44303], 99.95th=[44303], 00:43:16.044 | 99.99th=[44303] 00:43:16.044 bw ( KiB/s): min=31016, max=33744, per=32.53%, avg=32380.00, stdev=1928.99, samples=2 00:43:16.044 iops : min= 7754, max= 8436, avg=8095.00, stdev=482.25, samples=2 
00:43:16.044 lat (usec) : 1000=0.29% 00:43:16.044 lat (msec) : 2=0.92%, 4=5.50%, 10=77.88%, 20=12.41%, 50=3.01% 00:43:16.044 cpu : usr=5.76%, sys=9.93%, ctx=492, majf=0, minf=2 00:43:16.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:16.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:16.044 issued rwts: total=7711,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:16.044 job2: (groupid=0, jobs=1): err= 0: pid=3148675: Mon Dec 9 10:00:51 2024 00:43:16.044 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:43:16.044 slat (nsec): min=998, max=14934k, avg=109535.25, stdev=788931.83 00:43:16.044 clat (usec): min=4855, max=48029, avg=14847.88, stdev=7377.64 00:43:16.044 lat (usec): min=4862, max=48929, avg=14957.42, stdev=7428.66 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 7242], 5.00th=[ 8029], 10.00th=[ 8225], 20.00th=[ 8848], 00:43:16.044 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11469], 60.00th=[13566], 00:43:16.044 | 70.00th=[17171], 80.00th=[21365], 90.00th=[26346], 95.00th=[28705], 00:43:16.044 | 99.00th=[38011], 99.50th=[40109], 99.90th=[46400], 99.95th=[47973], 00:43:16.044 | 99.99th=[47973] 00:43:16.044 write: IOPS=4587, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:43:16.044 slat (nsec): min=1669, max=37958k, avg=113182.31, stdev=962680.49 00:43:16.044 clat (usec): min=1268, max=40341, avg=13789.02, stdev=7163.00 00:43:16.044 lat (usec): min=2367, max=40351, avg=13902.20, stdev=7240.52 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 4817], 5.00th=[ 5473], 10.00th=[ 7308], 20.00th=[ 8586], 00:43:16.044 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[11994], 60.00th=[13435], 00:43:16.044 | 70.00th=[14877], 80.00th=[19530], 90.00th=[21627], 95.00th=[26608], 00:43:16.044 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:43:16.044 | 99.99th=[40109] 00:43:16.044 bw ( KiB/s): min=16384, max=19440, per=17.99%, avg=17912.00, stdev=2160.92, samples=2 00:43:16.044 iops : min= 4096, max= 4860, avg=4478.00, stdev=540.23, samples=2 00:43:16.044 lat (msec) : 2=0.01%, 4=0.11%, 10=36.59%, 20=41.88%, 50=21.41% 00:43:16.044 cpu : usr=3.39%, sys=5.28%, ctx=265, majf=0, minf=1 00:43:16.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:43:16.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:16.044 issued rwts: total=4096,4606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:16.044 job3: (groupid=0, jobs=1): err= 0: pid=3148681: Mon Dec 9 10:00:51 2024 00:43:16.044 read: IOPS=5587, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1002msec) 00:43:16.044 slat (nsec): min=921, max=13253k, avg=105669.17, stdev=673979.17 00:43:16.044 clat (usec): min=1124, max=44163, avg=13014.55, stdev=8031.77 00:43:16.044 lat (usec): min=2992, max=44173, avg=13120.21, stdev=8075.75 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 5604], 5.00th=[ 7832], 10.00th=[ 8291], 20.00th=[ 8848], 00:43:16.044 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10159], 00:43:16.044 | 70.00th=[10945], 80.00th=[15270], 90.00th=[25560], 95.00th=[31589], 00:43:16.044 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 
99.95th=[44303], 00:43:16.044 | 99.99th=[44303] 00:43:16.044 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:43:16.044 slat (nsec): min=1553, max=6676.4k, avg=67831.33, stdev=348114.48 00:43:16.044 clat (usec): min=1276, max=35370, avg=9622.05, stdev=3684.36 00:43:16.044 lat (usec): min=1286, max=35371, avg=9689.88, stdev=3683.79 00:43:16.044 clat percentiles (usec): 00:43:16.044 | 1.00th=[ 3556], 5.00th=[ 5997], 10.00th=[ 7373], 20.00th=[ 8029], 00:43:16.044 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:43:16.044 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[12125], 95.00th=[15270], 00:43:16.045 | 99.00th=[26870], 99.50th=[32637], 99.90th=[35390], 99.95th=[35390], 00:43:16.045 | 99.99th=[35390] 00:43:16.045 bw ( KiB/s): min=20480, max=24576, per=22.63%, avg=22528.00, stdev=2896.31, samples=2 00:43:16.045 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:43:16.045 lat (msec) : 2=0.09%, 4=1.14%, 10=67.42%, 20=22.90%, 50=8.45% 00:43:16.045 cpu : usr=3.40%, sys=5.29%, ctx=590, majf=0, minf=1 00:43:16.045 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:43:16.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:16.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:16.045 issued rwts: total=5599,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:16.045 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:16.045 00:43:16.045 Run status group 0 (all jobs): 00:43:16.045 READ: bw=91.7MiB/s (96.1MB/s), 15.9MiB/s-29.9MiB/s (16.7MB/s-31.3MB/s), io=92.4MiB (96.9MB), run=1002-1008msec 00:43:16.045 WRITE: bw=97.2MiB/s (102MB/s), 17.9MiB/s-31.7MiB/s (18.8MB/s-33.3MB/s), io=98.0MiB (103MB), run=1002-1008msec 00:43:16.045 00:43:16.045 Disk stats (read/write): 00:43:16.045 nvme0n1: ios=5148/5343, merge=0/0, ticks=47611/53606, in_queue=101217, util=89.58% 00:43:16.045 nvme0n2: ios=6702/6750, merge=0/0, ticks=49723/42162, in_queue=91885, util=94.09% 00:43:16.045 nvme0n3: ios=3523/3584, merge=0/0, ticks=24170/23643, in_queue=47813, util=100.00% 00:43:16.045 nvme0n4: ios=5005/5120, merge=0/0, ticks=16628/14133, in_queue=30761, util=95.83% 00:43:16.045 10:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:43:16.045 [global] 00:43:16.045 thread=1 00:43:16.045 invalidate=1 00:43:16.045 rw=randwrite 00:43:16.045 time_based=1 00:43:16.045 runtime=1 00:43:16.045 ioengine=libaio 00:43:16.045 direct=1 00:43:16.045 bs=4096 00:43:16.045 iodepth=128 00:43:16.045 norandommap=0 00:43:16.045 numjobs=1 00:43:16.045 00:43:16.045 verify_dump=1 00:43:16.045 verify_backlog=512 00:43:16.045 verify_state_save=0 00:43:16.045 do_verify=1 00:43:16.045 verify=crc32c-intel 00:43:16.045 [job0] 00:43:16.045 filename=/dev/nvme0n1 00:43:16.045 [job1] 00:43:16.045 filename=/dev/nvme0n2 00:43:16.045 [job2] 00:43:16.045 filename=/dev/nvme0n3 00:43:16.045 [job3] 00:43:16.045 filename=/dev/nvme0n4 00:43:16.045 Could not set queue depth (nvme0n1) 00:43:16.045 Could not set queue depth (nvme0n2) 00:43:16.045 Could not set queue depth (nvme0n3) 00:43:16.045 Could not set queue depth (nvme0n4) 00:43:16.045 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:16.045 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:43:16.045 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:16.045 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:43:16.045 fio-3.35 00:43:16.045 Starting 4 threads 00:43:17.432 00:43:17.432 job0: (groupid=0, jobs=1): err= 0: pid=3149080: Mon Dec 9 10:00:52 2024 00:43:17.432 read: IOPS=7187, BW=28.1MiB/s (29.4MB/s)(28.2MiB/1005msec) 00:43:17.432 slat (nsec): min=906, max=10267k, avg=53548.87, stdev=421924.71 00:43:17.432 clat (usec): min=2059, max=19508, avg=7975.03, stdev=3055.14 00:43:17.432 lat (usec): min=2148, max=19533, avg=8028.58, stdev=3075.73 00:43:17.432 clat percentiles (usec): 00:43:17.432 | 1.00th=[ 2573], 5.00th=[ 3621], 10.00th=[ 4621], 20.00th=[ 5604], 00:43:17.432 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 7570], 60.00th=[ 8094], 00:43:17.432 | 70.00th=[ 8979], 80.00th=[10159], 90.00th=[12387], 95.00th=[13960], 00:43:17.432 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[18744], 00:43:17.432 | 99.99th=[19530] 00:43:17.432 write: IOPS=7641, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1005msec); 0 zone resets 00:43:17.432 slat (nsec): min=1549, max=7346.4k, avg=65237.87, stdev=424398.40 00:43:17.432 clat (usec): min=574, max=37086, avg=9114.83, stdev=8044.50 00:43:17.432 lat (usec): min=582, max=37095, avg=9180.07, stdev=8094.62 00:43:17.432 clat percentiles (usec): 00:43:17.432 | 1.00th=[ 1139], 5.00th=[ 2040], 10.00th=[ 3458], 20.00th=[ 4621], 00:43:17.433 | 30.00th=[ 5342], 40.00th=[ 6063], 50.00th=[ 6390], 60.00th=[ 7177], 00:43:17.433 | 70.00th=[ 8356], 80.00th=[10552], 90.00th=[21627], 95.00th=[32113], 00:43:17.433 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:43:17.433 | 99.99th=[36963] 00:43:17.433 bw ( KiB/s): min=24424, max=36432, per=34.13%, avg=30428.00, stdev=8490.94, samples=2 00:43:17.433 iops : min= 6106, max= 9108, avg=7607.00, stdev=2122.73, samples=2 00:43:17.433 lat (usec) : 750=0.05%, 1000=0.21% 00:43:17.433 lat (msec) : 2=2.28%, 4=6.99%, 10=69.42%, 20=15.67%, 50=5.39% 00:43:17.433 cpu : usr=4.98%, sys=8.47%, ctx=535, majf=0, minf=2 00:43:17.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:17.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:17.433 issued rwts: total=7223,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:17.433 job1: (groupid=0, jobs=1): err= 0: pid=3149097: Mon Dec 9 10:00:52 2024 00:43:17.433 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:43:17.433 slat (nsec): min=946, max=6826.4k, avg=72417.11, stdev=358425.20 00:43:17.433 clat (usec): min=5418, max=19924, avg=9452.63, stdev=2782.77 00:43:17.433 lat (usec): min=5585, max=19931, avg=9525.05, stdev=2786.60 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 6128], 5.00th=[ 6783], 10.00th=[ 7046], 20.00th=[ 7570], 00:43:17.433 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8586], 60.00th=[ 9110], 00:43:17.433 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[13304], 95.00th=[16057], 00:43:17.433 | 99.00th=[19268], 99.50th=[19792], 99.90th=[19792], 99.95th=[20055], 00:43:17.433 | 99.99th=[20055] 00:43:17.433 write: IOPS=7304, BW=28.5MiB/s (29.9MB/s)(28.6MiB/1002msec); 0 zone resets 00:43:17.433 slat (nsec): min=1567, max=7427.7k, avg=61097.12, stdev=314250.19 00:43:17.433 
clat (usec): min=1296, max=16277, avg=8002.95, stdev=1999.26 00:43:17.433 lat (usec): min=1534, max=16288, avg=8064.04, stdev=1992.16 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 3818], 5.00th=[ 5604], 10.00th=[ 5997], 20.00th=[ 6783], 00:43:17.433 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8029], 00:43:17.433 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9896], 95.00th=[11338], 00:43:17.433 | 99.00th=[15795], 99.50th=[16057], 99.90th=[16188], 99.95th=[16319], 00:43:17.433 | 99.99th=[16319] 00:43:17.433 bw ( KiB/s): min=25336, max=32200, per=32.27%, avg=28768.00, stdev=4853.58, samples=2 00:43:17.433 iops : min= 6334, max= 8050, avg=7192.00, stdev=1213.40, samples=2 00:43:17.433 lat (msec) : 2=0.12%, 4=0.41%, 10=81.94%, 20=17.53% 00:43:17.433 cpu : usr=5.00%, sys=5.99%, ctx=802, majf=0, minf=2 00:43:17.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:43:17.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:17.433 issued rwts: total=7168,7319,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:17.433 job2: (groupid=0, jobs=1): err= 0: pid=3149119: Mon Dec 9 10:00:52 2024 00:43:17.433 read: IOPS=4576, BW=17.9MiB/s (18.7MB/s)(18.7MiB/1045msec) 00:43:17.433 slat (nsec): min=953, max=16390k, avg=106297.72, stdev=738644.94 00:43:17.433 clat (usec): min=3640, max=75927, avg=14336.26, stdev=9713.75 00:43:17.433 lat (usec): min=3647, max=75933, avg=14442.55, stdev=9755.07 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 5866], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9765], 00:43:17.433 | 30.00th=[10552], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:43:17.433 | 70.00th=[13566], 80.00th=[15139], 90.00th=[19530], 95.00th=[27132], 00:43:17.433 | 99.00th=[69731], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:43:17.433 | 99.99th=[76022] 00:43:17.433 write: IOPS=4899, BW=19.1MiB/s (20.1MB/s)(20.0MiB/1045msec); 0 zone resets 00:43:17.433 slat (nsec): min=1513, max=11325k, avg=91031.13, stdev=561886.98 00:43:17.433 clat (usec): min=3970, max=30103, avg=12445.08, stdev=5019.79 00:43:17.433 lat (usec): min=4535, max=30109, avg=12536.11, stdev=5060.55 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 5211], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 8586], 00:43:17.433 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11994], 00:43:17.433 | 70.00th=[13304], 80.00th=[16909], 90.00th=[20579], 95.00th=[22414], 00:43:17.433 | 99.00th=[26608], 99.50th=[26608], 99.90th=[30016], 99.95th=[30016], 00:43:17.433 | 99.99th=[30016] 00:43:17.433 bw ( KiB/s): min=20480, max=20480, per=22.97%, avg=20480.00, stdev= 0.00, samples=2 00:43:17.433 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:43:17.433 lat (msec) : 4=0.13%, 10=32.09%, 20=56.70%, 50=9.81%, 100=1.27% 00:43:17.433 cpu : usr=2.78%, sys=5.94%, ctx=364, majf=0, minf=2 00:43:17.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:43:17.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:17.433 issued rwts: total=4782,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:17.433 job3: (groupid=0, jobs=1): err= 0: pid=3149125: Mon Dec 9 10:00:52 2024 
00:43:17.433 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:43:17.433 slat (nsec): min=956, max=14348k, avg=165428.07, stdev=1018647.42 00:43:17.433 clat (usec): min=3612, max=44990, avg=21605.80, stdev=10398.07 00:43:17.433 lat (usec): min=3615, max=44996, avg=21771.23, stdev=10425.34 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 7832], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10290], 00:43:17.433 | 30.00th=[13698], 40.00th=[16909], 50.00th=[20579], 60.00th=[24249], 00:43:17.433 | 70.00th=[27919], 80.00th=[32113], 90.00th=[34866], 95.00th=[40109], 00:43:17.433 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:43:17.433 | 99.99th=[44827] 00:43:17.433 write: IOPS=3153, BW=12.3MiB/s (12.9MB/s)(12.4MiB/1005msec); 0 zone resets 00:43:17.433 slat (nsec): min=1580, max=11971k, avg=149425.00, stdev=959183.01 00:43:17.433 clat (usec): min=652, max=39878, avg=19224.28, stdev=9699.66 00:43:17.433 lat (usec): min=1221, max=43865, avg=19373.71, stdev=9736.00 00:43:17.433 clat percentiles (usec): 00:43:17.433 | 1.00th=[ 4359], 5.00th=[ 5735], 10.00th=[ 8029], 20.00th=[10814], 00:43:17.433 | 30.00th=[12518], 40.00th=[13304], 50.00th=[17171], 60.00th=[22152], 00:43:17.433 | 70.00th=[26084], 80.00th=[28181], 90.00th=[33162], 95.00th=[36439], 00:43:17.433 | 99.00th=[39584], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:43:17.433 | 99.99th=[40109] 00:43:17.433 bw ( KiB/s): min= 8192, max=16384, per=13.78%, avg=12288.00, stdev=5792.62, samples=2 00:43:17.433 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:43:17.433 lat (usec) : 750=0.02% 00:43:17.433 lat (msec) : 2=0.03%, 4=0.38%, 10=18.15%, 20=34.69%, 50=46.72% 00:43:17.433 cpu : usr=2.89%, sys=3.59%, ctx=219, majf=0, minf=1 00:43:17.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:43:17.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:17.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:43:17.433 issued rwts: total=3072,3169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:17.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:43:17.433 00:43:17.433 Run status group 0 (all jobs): 00:43:17.433 READ: bw=83.2MiB/s (87.2MB/s), 11.9MiB/s-28.1MiB/s (12.5MB/s-29.4MB/s), io=86.9MiB (91.1MB), run=1002-1045msec 00:43:17.433 WRITE: bw=87.1MiB/s (91.3MB/s), 12.3MiB/s-29.9MiB/s (12.9MB/s-31.3MB/s), io=91.0MiB (95.4MB), run=1002-1045msec 00:43:17.433 00:43:17.433 Disk stats (read/write): 00:43:17.433 nvme0n1: ios=6440/6656, merge=0/0, ticks=44170/51509, in_queue=95679, util=86.77% 00:43:17.433 nvme0n2: ios=5709/6144, merge=0/0, ticks=13625/11033, in_queue=24658, util=99.90% 00:43:17.433 nvme0n3: ios=3873/4096, merge=0/0, ticks=23857/24084, in_queue=47941, util=88.29% 00:43:17.433 nvme0n4: ios=2594/2784, merge=0/0, ticks=16921/16224, in_queue=33145, util=100.00% 00:43:17.433 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:43:17.433 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3149345 00:43:17.433 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:43:17.433 10:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:43:17.433 [global] 00:43:17.433 thread=1 00:43:17.433 invalidate=1 00:43:17.433 
rw=read 00:43:17.433 time_based=1 00:43:17.433 runtime=10 00:43:17.433 ioengine=libaio 00:43:17.433 direct=1 00:43:17.433 bs=4096 00:43:17.433 iodepth=1 00:43:17.433 norandommap=1 00:43:17.433 numjobs=1 00:43:17.433 00:43:17.433 [job0] 00:43:17.433 filename=/dev/nvme0n1 00:43:17.433 [job1] 00:43:17.433 filename=/dev/nvme0n2 00:43:17.433 [job2] 00:43:17.433 filename=/dev/nvme0n3 00:43:17.433 [job3] 00:43:17.433 filename=/dev/nvme0n4 00:43:17.433 Could not set queue depth (nvme0n1) 00:43:17.433 Could not set queue depth (nvme0n2) 00:43:17.433 Could not set queue depth (nvme0n3) 00:43:17.433 Could not set queue depth (nvme0n4) 00:43:18.008 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:18.008 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:18.008 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:18.008 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:43:18.008 fio-3.35 00:43:18.008 Starting 4 threads 00:43:20.556 10:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:43:20.556 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=10080256, buflen=4096 00:43:20.556 fio: pid=3149587, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:20.556 10:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:43:20.816 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=13533184, buflen=4096 00:43:20.816 fio: pid=3149580, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:20.816 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:20.816 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:43:21.077 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6963200, buflen=4096 00:43:21.077 fio: pid=3149561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:21.077 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.077 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:43:21.077 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.078 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:43:21.078 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=11673600, buflen=4096 00:43:21.078 fio: pid=3149564, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:43:21.338 00:43:21.338 job0: (groupid=0, 
jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3149561: Mon Dec 9 10:00:56 2024 00:43:21.338 read: IOPS=572, BW=2290KiB/s (2345kB/s)(6800KiB/2969msec) 00:43:21.338 slat (usec): min=6, max=12566, avg=33.85, stdev=304.09 00:43:21.338 clat (usec): min=465, max=42082, avg=1693.36, stdev=5353.24 00:43:21.338 lat (usec): min=492, max=42108, avg=1727.21, stdev=5361.14 00:43:21.338 clat percentiles (usec): 00:43:21.338 | 1.00th=[ 660], 5.00th=[ 742], 10.00th=[ 783], 20.00th=[ 857], 00:43:21.338 | 30.00th=[ 914], 40.00th=[ 963], 50.00th=[ 1004], 60.00th=[ 1029], 00:43:21.338 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1172], 00:43:21.338 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:21.338 | 99.99th=[42206] 00:43:21.338 bw ( KiB/s): min= 96, max= 4048, per=19.83%, avg=2577.60, stdev=1913.62, samples=5 00:43:21.338 iops : min= 24, max= 1012, avg=644.40, stdev=478.40, samples=5 00:43:21.338 lat (usec) : 500=0.12%, 750=5.64%, 1000=43.27% 00:43:21.338 lat (msec) : 2=49.09%, 10=0.06%, 50=1.76% 00:43:21.338 cpu : usr=1.01%, sys=2.26%, ctx=1702, majf=0, minf=1 00:43:21.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 issued rwts: total=1701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:21.338 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3149564: Mon Dec 9 10:00:56 2024 00:43:21.338 read: IOPS=897, BW=3591KiB/s (3677kB/s)(11.1MiB/3175msec) 00:43:21.338 slat (usec): min=6, max=16332, avg=47.95, stdev=561.63 00:43:21.338 clat (usec): min=378, max=41528, avg=1051.88, stdev=764.47 00:43:21.338 lat (usec): min=405, max=41553, avg=1099.84, stdev=949.29 00:43:21.338 clat percentiles (usec): 00:43:21.338 | 1.00th=[ 750], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 971], 00:43:21.338 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:43:21.338 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:43:21.338 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[ 1336], 99.95th=[ 1418], 00:43:21.338 | 99.99th=[41681] 00:43:21.338 bw ( KiB/s): min= 3527, max= 3728, per=28.16%, avg=3659.83, stdev=82.96, samples=6 00:43:21.338 iops : min= 881, max= 932, avg=914.83, stdev=20.98, samples=6 00:43:21.338 lat (usec) : 500=0.04%, 750=1.02%, 1000=27.85% 00:43:21.338 lat (msec) : 2=71.03%, 50=0.04% 00:43:21.338 cpu : usr=1.29%, sys=3.94%, ctx=2855, majf=0, minf=2 00:43:21.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 issued rwts: total=2851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:21.338 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3149580: Mon Dec 9 10:00:56 2024 00:43:21.338 read: IOPS=1183, BW=4734KiB/s (4847kB/s)(12.9MiB/2792msec) 00:43:21.338 slat (nsec): min=6931, max=61313, avg=23927.71, stdev=7649.54 00:43:21.338 clat (usec): min=379, max=1198, avg=808.72, stdev=89.05 00:43:21.338 lat (usec): min=406, max=1225, avg=832.65, 
stdev=90.36 00:43:21.338 clat percentiles (usec): 00:43:21.338 | 1.00th=[ 545], 5.00th=[ 644], 10.00th=[ 693], 20.00th=[ 742], 00:43:21.338 | 30.00th=[ 783], 40.00th=[ 807], 50.00th=[ 824], 60.00th=[ 840], 00:43:21.338 | 70.00th=[ 857], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 930], 00:43:21.338 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1188], 99.95th=[ 1205], 00:43:21.338 | 99.99th=[ 1205] 00:43:21.338 bw ( KiB/s): min= 4624, max= 5024, per=36.76%, avg=4777.60, stdev=173.75, samples=5 00:43:21.338 iops : min= 1156, max= 1256, avg=1194.40, stdev=43.44, samples=5 00:43:21.338 lat (usec) : 500=0.51%, 750=22.09%, 1000=76.61% 00:43:21.338 lat (msec) : 2=0.76% 00:43:21.338 cpu : usr=1.25%, sys=3.22%, ctx=3305, majf=0, minf=2 00:43:21.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 issued rwts: total=3305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:21.338 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3149587: Mon Dec 9 10:00:56 2024 00:43:21.338 read: IOPS=945, BW=3780KiB/s (3871kB/s)(9844KiB/2604msec) 00:43:21.338 slat (nsec): min=7182, max=59654, avg=26842.92, stdev=3298.37 00:43:21.338 clat (usec): min=480, max=1422, avg=1016.04, stdev=124.26 00:43:21.338 lat (usec): min=506, max=1449, avg=1042.89, stdev=124.55 00:43:21.338 clat percentiles (usec): 00:43:21.338 | 1.00th=[ 693], 5.00th=[ 807], 10.00th=[ 865], 20.00th=[ 922], 00:43:21.338 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1020], 60.00th=[ 1045], 00:43:21.338 | 70.00th=[ 1074], 80.00th=[ 1106], 90.00th=[ 1172], 95.00th=[ 1221], 00:43:21.338 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1385], 99.95th=[ 1401], 00:43:21.338 | 99.99th=[ 1418] 00:43:21.338 bw ( KiB/s): min= 3720, max= 3928, per=29.35%, avg=3814.40, stdev=80.48, samples=5 00:43:21.338 iops : min= 930, max= 982, avg=953.60, stdev=20.12, samples=5 00:43:21.338 lat (usec) : 500=0.04%, 750=2.56%, 1000=41.27% 00:43:21.338 lat (msec) : 2=56.09% 00:43:21.338 cpu : usr=1.96%, sys=3.50%, ctx=2462, majf=0, minf=2 00:43:21.338 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.338 issued rwts: total=2462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.338 latency : target=0, window=0, percentile=100.00%, depth=1 00:43:21.338 00:43:21.338 Run status group 0 (all jobs): 00:43:21.338 READ: bw=12.7MiB/s (13.3MB/s), 2290KiB/s-4734KiB/s (2345kB/s-4847kB/s), io=40.3MiB (42.2MB), run=2604-3175msec 00:43:21.338 00:43:21.338 Disk stats (read/write): 00:43:21.338 nvme0n1: ios=1634/0, merge=0/0, ticks=2614/0, in_queue=2614, util=94.36% 00:43:21.338 nvme0n2: ios=2819/0, merge=0/0, ticks=2692/0, in_queue=2692, util=93.90% 00:43:21.338 nvme0n3: ios=3084/0, merge=0/0, ticks=2419/0, in_queue=2419, util=96.03% 00:43:21.338 nvme0n4: ios=2461/0, merge=0/0, ticks=2257/0, in_queue=2257, util=96.42% 00:43:21.339 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.339 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:43:21.598 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.598 10:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:43:21.598 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.598 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:43:21.859 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:43:21.859 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3149345 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:22.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:43:22.119 nvmf hotplug test: fio failed as expected 00:43:22.119 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:43:22.390 10:00:57 
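[editor's note] The bdev_malloc_delete / bdev_raid_delete calls above are the hotplug half of the test: backing bdevs are removed while fio is still reading them, so the "Operation not supported" io_u errors and the non-zero fio status are the expected outcome. A condensed sketch of the pattern (rpc.py path abbreviated; fio_pid as in the trace):

    # Sketch: delete the backing bdev out from under a running fio job.
    rpc.py bdev_malloc_delete Malloc0    # in-flight reads now fail by design
    wait "$fio_pid" || fio_status=$?     # non-zero here is the pass condition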
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:22.390 rmmod nvme_tcp 00:43:22.390 rmmod nvme_fabrics 00:43:22.390 rmmod nvme_keyring 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3146183 ']' 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3146183 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3146183 ']' 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3146183 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:22.390 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146183 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146183' 00:43:22.650 killing process with pid 3146183 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3146183 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3146183 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:22.650 10:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:22.650 10:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:25.191 00:43:25.191 real 0m27.790s 00:43:25.191 user 2m17.310s 00:43:25.191 sys 0m12.313s 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:43:25.191 ************************************ 00:43:25.191 END TEST nvmf_fio_target 00:43:25.191 ************************************ 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:25.191 ************************************ 00:43:25.191 START TEST nvmf_bdevio 00:43:25.191 ************************************ 00:43:25.191 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:43:25.191 * Looking for test storage... 
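[editor's note] The nvmftestfini trace just above tears the fio_target environment down before the bdevio test begins. Its net effect, condensed into plain commands (interface name taken from the trace; a rough sketch, not the script itself):

    modprobe -r nvme-tcp nvme-fabrics                      # matches the rmmod lines above
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the test's own rules
    ip -4 addr flush cvl_0_1                               # release the initiator address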
00:43:25.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:25.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.192 --rc genhtml_branch_coverage=1 00:43:25.192 --rc genhtml_function_coverage=1 00:43:25.192 --rc genhtml_legend=1 00:43:25.192 --rc geninfo_all_blocks=1 00:43:25.192 --rc geninfo_unexecuted_blocks=1 00:43:25.192 00:43:25.192 ' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:25.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.192 --rc genhtml_branch_coverage=1 00:43:25.192 --rc genhtml_function_coverage=1 00:43:25.192 --rc genhtml_legend=1 00:43:25.192 --rc geninfo_all_blocks=1 00:43:25.192 --rc geninfo_unexecuted_blocks=1 00:43:25.192 00:43:25.192 ' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:25.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.192 --rc genhtml_branch_coverage=1 00:43:25.192 --rc genhtml_function_coverage=1 00:43:25.192 --rc genhtml_legend=1 00:43:25.192 --rc geninfo_all_blocks=1 00:43:25.192 --rc geninfo_unexecuted_blocks=1 00:43:25.192 00:43:25.192 ' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:25.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:25.192 --rc genhtml_branch_coverage=1 00:43:25.192 --rc genhtml_function_coverage=1 00:43:25.192 --rc genhtml_legend=1 00:43:25.192 --rc geninfo_all_blocks=1 00:43:25.192 --rc geninfo_unexecuted_blocks=1 00:43:25.192 00:43:25.192 ' 00:43:25.192 10:01:00 
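[editor's note] The long run of scripts/common.sh traces above is a field-by-field version comparison — "is lcov 1.15 older than 2?" — used to decide which coverage flags to export. An equivalent sketch using sort -V instead of SPDK's manual loop:

    lt() { [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo older    # prints "older", the same result the trace reaches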
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:25.192 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:25.193 10:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:43:25.193 10:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:31.783 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:31.783 10:01:07 
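[editor's note] The e810/x722/mlx arrays built above hold known Intel (0x8086) and Mellanox (0x15b3) PCI device IDs; the probe then matches installed NICs against them, and both hits here are E810 parts (0x159b). A standalone sketch for just those two Intel IDs (assumes lspci is available):

    # List Intel E810 functions the same way the 0x1592/0x159b probes would find them.
    for dev in 1592 159b; do
        lspci -d 8086:"$dev"
    done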
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:31.783 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:31.783 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:31.783 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:31.783 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:31.784 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:32.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:32.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.715 ms 00:43:32.045 00:43:32.045 --- 10.0.0.2 ping statistics --- 00:43:32.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:32.045 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:43:32.045 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:32.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:32.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:43:32.045 00:43:32.046 --- 10.0.0.1 ping statistics --- 00:43:32.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:32.046 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:32.046 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:32.306 10:01:07 
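[editor's note] The nvmf_tcp_init trace above moves one port of the NIC pair into a network namespace so the target (10.0.0.2) and initiator (10.0.0.1) can exchange real TCP traffic on a single host. Condensed from the trace, with the same names and addresses:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ping -c 1 10.0.0.2                              # reachability check, as in the trace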
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3154559 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3154559 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3154559 ']' 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:32.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:32.306 10:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:32.306 [2024-12-09 10:01:07.561190] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:32.306 [2024-12-09 10:01:07.562175] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:43:32.306 [2024-12-09 10:01:07.562212] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:32.306 [2024-12-09 10:01:07.655581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:32.306 [2024-12-09 10:01:07.678245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:32.306 [2024-12-09 10:01:07.678288] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:32.306 [2024-12-09 10:01:07.678300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:32.306 [2024-12-09 10:01:07.678310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:32.306 [2024-12-09 10:01:07.678317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:32.306 [2024-12-09 10:01:07.680015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:43:32.306 [2024-12-09 10:01:07.680135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:43:32.306 [2024-12-09 10:01:07.680294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:43:32.306 [2024-12-09 10:01:07.680296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:43:32.306 [2024-12-09 10:01:07.735405] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
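[editor's note] The -m 0x78 core mask passed to nvmf_tgt selects exactly the four cores the reactor lines above report: 0x78 = 0b01111000, i.e. cores 3 through 6. A quick arithmetic check:

    mask=0x78
    for c in $(seq 0 7); do (( (mask >> c) & 1 )) && echo "core $c"; done
    # prints core 3, core 4, core 5, core 6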
00:43:32.306 [2024-12-09 10:01:07.736724] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:43:32.306 [2024-12-09 10:01:07.736850] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:32.306 [2024-12-09 10:01:07.737414] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:32.306 [2024-12-09 10:01:07.737495] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 [2024-12-09 10:01:08.397208] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 Malloc0 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.247 10:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:33.247 [2024-12-09 10:01:08.485461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:33.247 { 00:43:33.247 "params": { 00:43:33.247 "name": "Nvme$subsystem", 00:43:33.247 "trtype": "$TEST_TRANSPORT", 00:43:33.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:33.247 "adrfam": "ipv4", 00:43:33.247 "trsvcid": "$NVMF_PORT", 00:43:33.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:33.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:33.247 "hdgst": ${hdgst:-false}, 00:43:33.247 "ddgst": ${ddgst:-false} 00:43:33.247 }, 00:43:33.247 "method": "bdev_nvme_attach_controller" 00:43:33.247 } 00:43:33.247 EOF 00:43:33.247 )") 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:43:33.247 10:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:33.247 "params": { 00:43:33.247 "name": "Nvme1", 00:43:33.248 "trtype": "tcp", 00:43:33.248 "traddr": "10.0.0.2", 00:43:33.248 "adrfam": "ipv4", 00:43:33.248 "trsvcid": "4420", 00:43:33.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:33.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:33.248 "hdgst": false, 00:43:33.248 "ddgst": false 00:43:33.248 }, 00:43:33.248 "method": "bdev_nvme_attach_controller" 00:43:33.248 }' 00:43:33.248 [2024-12-09 10:01:08.539598] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
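The rpc_cmd wrappers traced above are ordinary SPDK RPCs; a minimal sketch of the same provisioning sequence against the running target, issued with scripts/rpc.py from the SPDK tree (all flags copied from the trace):

    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as traced
    $RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON that gen_nvmf_target_json prints above is the initiator-side half: a bdev_nvme_attach_controller config entry that bdevio reads via --json /dev/fd/62, so the suite exercises the malloc namespace over a real NVMe/TCP connection rather than opening Malloc0 directly.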
00:43:33.248 [2024-12-09 10:01:08.539655] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3154910 ] 00:43:33.248 [2024-12-09 10:01:08.628971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:33.248 [2024-12-09 10:01:08.650148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:33.248 [2024-12-09 10:01:08.650272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:43:33.248 [2024-12-09 10:01:08.650275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:33.509 I/O targets: 00:43:33.509 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:43:33.509 00:43:33.509 00:43:33.509 CUnit - A unit testing framework for C - Version 2.1-3 00:43:33.509 http://cunit.sourceforge.net/ 00:43:33.509 00:43:33.509 00:43:33.509 Suite: bdevio tests on: Nvme1n1 00:43:33.509 Test: blockdev write read block ...passed 00:43:33.509 Test: blockdev write zeroes read block ...passed 00:43:33.509 Test: blockdev write zeroes read no split ...passed 00:43:33.509 Test: blockdev write zeroes read split ...passed 00:43:33.509 Test: blockdev write zeroes read split partial ...passed 00:43:33.509 Test: blockdev reset ...[2024-12-09 10:01:08.948009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:43:33.509 [2024-12-09 10:01:08.948072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f0580 (9): Bad file descriptor 00:43:33.770 [2024-12-09 10:01:09.041606] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:43:33.770 passed 00:43:33.770 Test: blockdev write read 8 blocks ...passed 00:43:33.770 Test: blockdev write read size > 128k ...passed 00:43:33.770 Test: blockdev write read invalid size ...passed 00:43:33.770 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:33.770 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:33.770 Test: blockdev write read max offset ...passed 00:43:33.770 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:34.030 Test: blockdev writev readv 8 blocks ...passed 00:43:34.030 Test: blockdev writev readv 30 x 1block ...passed 00:43:34.030 Test: blockdev writev readv block ...passed 00:43:34.030 Test: blockdev writev readv size > 128k ...passed 00:43:34.030 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:34.030 Test: blockdev comparev and writev ...[2024-12-09 10:01:09.305912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.305941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.305953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.305959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.306489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.306498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.306507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.306513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.307044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.307053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.307062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.030 [2024-12-09 10:01:09.307068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:43:34.030 [2024-12-09 10:01:09.307611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.031 [2024-12-09 10:01:09.307618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:43:34.031 [2024-12-09 10:01:09.307628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:43:34.031 [2024-12-09 10:01:09.307634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:43:34.031 passed 00:43:34.031 Test: blockdev nvme passthru rw ...passed 00:43:34.031 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:01:09.391345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:34.031 [2024-12-09 10:01:09.391356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:43:34.031 [2024-12-09 10:01:09.391797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:34.031 [2024-12-09 10:01:09.391805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:43:34.031 [2024-12-09 10:01:09.392220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:34.031 [2024-12-09 10:01:09.392227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:43:34.031 [2024-12-09 10:01:09.392589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:43:34.031 [2024-12-09 10:01:09.392596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:43:34.031 passed 00:43:34.031 Test: blockdev nvme admin passthru ...passed 00:43:34.031 Test: blockdev copy ...passed 00:43:34.031 00:43:34.031 Run Summary: Type Total Ran Passed Failed Inactive 00:43:34.031 suites 1 1 n/a 0 0 00:43:34.031 tests 23 23 23 0 0 00:43:34.031 asserts 152 152 152 0 n/a 00:43:34.031 00:43:34.031 Elapsed time = 1.347 seconds 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:34.306 rmmod nvme_tcp 00:43:34.306 rmmod nvme_fabrics 00:43:34.306 rmmod nvme_keyring 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
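The teardown that nvmftestfini performs here reduces to roughly the following; module names and the iptables filter are taken from the trace, and the helper additionally retries the modprobe removal in a loop and checks via ps that it is not about to kill a sudo process:

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp                               # pulls nvme_fabrics and nvme_keyring out too, per the rmmod lines above
    modprobe -v -r nvme-fabrics                           # already unloaded at this point in the trace
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess: stop the target started earlier
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: drop the test's ACCEPT rule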
00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3154559 ']' 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3154559 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3154559 ']' 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3154559 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3154559 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3154559' 00:43:34.306 killing process with pid 3154559 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3154559 00:43:34.306 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3154559 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:43:34.567 10:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:37.114 00:43:37.114 real 0m11.798s 00:43:37.114 user 
0m9.095s 00:43:37.114 sys 0m6.198s 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:43:37.114 ************************************ 00:43:37.114 END TEST nvmf_bdevio 00:43:37.114 ************************************ 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:43:37.114 00:43:37.114 real 4m53.331s 00:43:37.114 user 10m8.571s 00:43:37.114 sys 2m2.150s 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:37.114 10:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:43:37.114 ************************************ 00:43:37.114 END TEST nvmf_target_core_interrupt_mode 00:43:37.114 ************************************ 00:43:37.114 10:01:12 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:37.114 10:01:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:43:37.114 10:01:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:37.114 10:01:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:43:37.114 ************************************ 00:43:37.114 START TEST nvmf_interrupt 00:43:37.114 ************************************ 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:43:37.114 * Looking for test storage... 
00:43:37.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:43:37.114 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.115 --rc genhtml_branch_coverage=1 00:43:37.115 --rc genhtml_function_coverage=1 00:43:37.115 --rc genhtml_legend=1 00:43:37.115 --rc geninfo_all_blocks=1 00:43:37.115 --rc geninfo_unexecuted_blocks=1 00:43:37.115 00:43:37.115 ' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.115 --rc genhtml_branch_coverage=1 00:43:37.115 --rc genhtml_function_coverage=1 00:43:37.115 --rc genhtml_legend=1 00:43:37.115 --rc geninfo_all_blocks=1 00:43:37.115 --rc geninfo_unexecuted_blocks=1 00:43:37.115 00:43:37.115 ' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.115 --rc genhtml_branch_coverage=1 00:43:37.115 --rc genhtml_function_coverage=1 00:43:37.115 --rc genhtml_legend=1 00:43:37.115 --rc geninfo_all_blocks=1 00:43:37.115 --rc geninfo_unexecuted_blocks=1 00:43:37.115 00:43:37.115 ' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:37.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.115 --rc genhtml_branch_coverage=1 00:43:37.115 --rc genhtml_function_coverage=1 00:43:37.115 --rc genhtml_legend=1 00:43:37.115 --rc geninfo_all_blocks=1 00:43:37.115 --rc geninfo_unexecuted_blocks=1 00:43:37.115 00:43:37.115 ' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:43:37.115 10:01:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:43:43.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.851 10:01:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:43:43.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:43.851 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:43:43.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:43:43.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:43.852 10:01:19 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:43.852 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:44.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:44.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:43:44.156 00:43:44.156 --- 10.0.0.2 ping statistics --- 00:43:44.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.156 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:44.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:44.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:43:44.156 00:43:44.156 --- 10.0.0.1 ping statistics --- 00:43:44.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.156 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3159260 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3159260 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3159260 ']' 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:44.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:44.156 10:01:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:44.156 [2024-12-09 10:01:19.554715] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:43:44.156 [2024-12-09 10:01:19.555689] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:43:44.156 [2024-12-09 10:01:19.555727] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:44.419 [2024-12-09 10:01:19.649412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:44.419 [2024-12-09 10:01:19.666839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:43:44.419 [2024-12-09 10:01:19.666871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:44.419 [2024-12-09 10:01:19.666879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:44.419 [2024-12-09 10:01:19.666886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:44.419 [2024-12-09 10:01:19.666892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:43:44.419 [2024-12-09 10:01:19.668133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:44.419 [2024-12-09 10:01:19.668135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:44.419 [2024-12-09 10:01:19.717456] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:43:44.419 [2024-12-09 10:01:19.717986] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:43:44.419 [2024-12-09 10:01:19.718321] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:43:44.990 5000+0 records in 00:43:44.990 5000+0 records out 00:43:44.990 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0171875 s, 596 MB/s 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:44.990 AIO0 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:44.990 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:45.251 [2024-12-09 10:01:20.445100] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.251 10:01:20 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:43:45.251 [2024-12-09 10:01:20.489460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3159260 0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 0 idle 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159260 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.25 reactor_0' 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159260 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.25 reactor_0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3159260 1 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 1 idle 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:45.251 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:45.512 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159267 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159267 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3159480 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3159260 0 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3159260 0 busy 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:45.513 10:01:20 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159260 root 20 0 128.2g 44928 32256 R 40.0 0.0 0:00.31 reactor_0' 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159260 root 20 0 128.2g 44928 32256 R 40.0 0.0 0:00.31 reactor_0 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=40.0 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=40 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3159260 1 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3159260 1 busy 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:45.773 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159267 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1' 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159267 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:46.033 10:01:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3159480 00:43:56.025 Initializing NVMe Controllers 00:43:56.025 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:43:56.025 Controller IO queue size 256, less than required. 00:43:56.025 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:43:56.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:43:56.025 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:43:56.025 Initialization complete. Launching workers. 
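The busy checks traced above sample each reactor thread's CPU with one batch iteration of top while spdk_nvme_perf drives load: -q 256 is the queue depth, -o 4096 the 4 KiB I/O size, -w randrw -M 30 a random mix with 30% reads, -t 10 a 10-second run, and -c 0xC pins the initiator to cores 2-3, pushing target reactors 0 and 1 out of idle. A condensed, illustrative sketch of that probe; the helper name is invented for this sketch, and the threshold follows the BUSY_THRESHOLD=30 value shown in the trace:

# Illustrative condensation of the reactor_is_busy_or_idle probe traced
# above (pid = nvmf_tgt PID, idx = reactor index).
reactor_cpu_rate() {
  local pid=$1 idx=$2
  # One batch sample of the reactor thread; %CPU is the 9th column once
  # leading whitespace is stripped.
  top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}" \
    | sed -e 's/^\s*//g' | awk '{print $9}'
}

rate=$(reactor_cpu_rate 3159260 0)
rate=${rate%.*}    # truncate 40.0 -> 40, matching the cpu_rate steps traced above
if (( rate >= 30 )); then echo busy; else echo idle; fi

The real helper additionally retries up to ten times (the (( j = 10 )) loop in the trace) before declaring the reactor stuck in the wrong state.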
00:43:56.025 ======================================================== 00:43:56.025 Latency(us) 00:43:56.025 Device Information : IOPS MiB/s Average min max 00:43:56.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18886.80 73.78 13559.80 3020.17 32637.74 00:43:56.025 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19593.00 76.54 13067.90 7306.74 30008.15 00:43:56.025 ======================================================== 00:43:56.025 Total : 38479.80 150.31 13309.34 3020.17 32637.74 00:43:56.025 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3159260 0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 0 idle 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159260 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159260 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:20.25 reactor_0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3159260 1 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 1 idle 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159267 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159267 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:56.025 10:01:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:43:56.967 10:01:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:43:56.967 10:01:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:43:56.967 10:01:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:43:56.967 10:01:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:43:56.967 10:01:32 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3159260 0 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 0 idle 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:58.881 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159260 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.61 reactor_0' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159260 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.61 reactor_0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3159260 1 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3159260 1 idle 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3159260 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
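After perf completes and the reactors return to idle, the test attaches the kernel initiator with nvme connect against 10.0.0.2:4420 and polls until a block device carrying the target's serial shows up. A minimal sketch of that waitforserial-style poll, mirroring the 15-attempt, 2-second cadence in the trace; the function name here is illustrative, not the script's:

# Wait until lsblk reports a namespace with the expected serial, as
# waitforserial does in the trace (up to 15 tries, 2 s apart).
wait_for_serial() {
  local serial=$1 i=0 found=0
  while (( i++ <= 15 )); do
    sleep 2
    found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( found == 1 )) && return 0
  done
  return 1
}

wait_for_serial SPDKISFASTANDAWESOME || echo "namespace never appeared" >&2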
00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3159260 -w 256 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3159267 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3159267 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:43:59.142 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:43:59.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:59.403 rmmod nvme_tcp 00:43:59.403 rmmod nvme_fabrics 00:43:59.403 rmmod nvme_keyring 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3159260 ']' 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3159260 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3159260 ']' 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3159260 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:59.403 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3159260 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3159260' 00:43:59.663 killing process with pid 3159260 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3159260 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3159260 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:59.663 10:01:34 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:43:59.663 10:01:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:59.663 10:01:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:59.663 10:01:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:59.663 10:01:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:43:59.663 10:01:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:02.229 10:01:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:02.229 00:44:02.229 real 0m25.018s 00:44:02.229 user 0m40.321s 00:44:02.229 sys 0m9.302s 00:44:02.229 10:01:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:02.229 10:01:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:44:02.229 ************************************ 00:44:02.229 END TEST nvmf_interrupt 00:44:02.229 ************************************ 00:44:02.229 00:44:02.229 real 37m44.558s 00:44:02.229 user 91m16.546s 00:44:02.229 sys 11m18.507s 00:44:02.229 10:01:37 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:02.229 10:01:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:02.229 ************************************ 00:44:02.229 END TEST nvmf_tcp 00:44:02.229 ************************************ 00:44:02.229 10:01:37 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:44:02.229 10:01:37 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:02.229 10:01:37 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:02.229 10:01:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:02.229 10:01:37 -- common/autotest_common.sh@10 -- # set +x 00:44:02.229 ************************************ 00:44:02.229 START TEST spdkcli_nvmf_tcp 00:44:02.229 ************************************ 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:44:02.229 * Looking for test storage... 00:44:02.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:44:02.229 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:02.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.230 --rc genhtml_branch_coverage=1 00:44:02.230 --rc genhtml_function_coverage=1 00:44:02.230 --rc genhtml_legend=1 00:44:02.230 --rc geninfo_all_blocks=1 00:44:02.230 --rc geninfo_unexecuted_blocks=1 00:44:02.230 00:44:02.230 ' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:02.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.230 --rc genhtml_branch_coverage=1 00:44:02.230 --rc genhtml_function_coverage=1 00:44:02.230 --rc genhtml_legend=1 00:44:02.230 --rc geninfo_all_blocks=1 00:44:02.230 --rc geninfo_unexecuted_blocks=1 00:44:02.230 00:44:02.230 ' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:02.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.230 --rc genhtml_branch_coverage=1 00:44:02.230 --rc genhtml_function_coverage=1 00:44:02.230 --rc genhtml_legend=1 00:44:02.230 --rc geninfo_all_blocks=1 00:44:02.230 --rc geninfo_unexecuted_blocks=1 00:44:02.230 00:44:02.230 ' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:02.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:02.230 --rc genhtml_branch_coverage=1 00:44:02.230 --rc genhtml_function_coverage=1 00:44:02.230 --rc genhtml_legend=1 00:44:02.230 --rc geninfo_all_blocks=1 00:44:02.230 --rc geninfo_unexecuted_blocks=1 00:44:02.230 00:44:02.230 ' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:44:02.230 
10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:44:02.230 10:01:37 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:02.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3162743 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3162743 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3162743 ']' 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:02.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:02.230 10:01:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:02.230 [2024-12-09 10:01:37.506435] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
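The "line 33: [: : integer expression expected" message above comes from the traced test '[' '' -eq 1 ']' while common.sh builds the nvmf app arguments: an empty string is not an integer, so test(1) complains and returns non-zero, which the script simply treats as false before continuing. A hedged sketch of the failure mode and a guard; the variable name below is a hypothetical stand-in, since the actual expansion at line 33 is not visible in the trace:

# Reproduce the benign error: test(1) rejects '' as a non-integer operand.
[ '' -eq 1 ] 2>/dev/null || echo "falls through as false"

# Defaulting the expansion keeps the comparison well-formed even when the
# flag is unset (SOME_TEST_FLAG is assumed for illustration):
[ "${SOME_TEST_FLAG:-0}" -eq 1 ] && echo "flag enabled"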
00:44:02.230 [2024-12-09 10:01:37.506514] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162743 ] 00:44:02.230 [2024-12-09 10:01:37.596343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:44:02.230 [2024-12-09 10:01:37.625486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:02.230 [2024-12-09 10:01:37.625493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:03.172 10:01:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:44:03.173 10:01:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:44:03.173 10:01:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:44:03.173 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:03.173 10:01:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:03.173 10:01:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:44:03.173 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:44:03.173 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:44:03.173 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:44:03.173 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:44:03.173 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:44:03.173 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:44:03.173 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:03.173 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:44:03.173 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:44:03.173 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:44:03.173 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:44:03.173 ' 00:44:05.718 [2024-12-09 10:01:40.858281] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:06.658 [2024-12-09 10:01:42.066172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:44:09.198 [2024-12-09 10:01:44.284368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:44:11.108 [2024-12-09 10:01:46.189770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:44:12.495 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:44:12.496 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:44:12.496 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:44:12.496 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:44:12.496 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:44:12.496 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:44:12.496 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:44:12.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:12.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:44:12.496 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:12.496 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:44:12.496 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:44:12.496 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:44:12.496 10:01:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:44:12.757 10:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:13.018 
10:01:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:13.018 10:01:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:13.019 10:01:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:44:13.019 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:44:13.019 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:13.019 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:44:13.019 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:44:13.019 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:44:13.019 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:44:13.019 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:44:13.019 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:44:13.019 ' 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:44:18.312 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:44:18.312 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:44:18.312 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:44:18.312 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:18.312 
10:01:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3162743 ']' 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162743' 00:44:18.312 killing process with pid 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3162743 ']' 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3162743 00:44:18.312 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3162743 ']' 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3162743 00:44:18.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3162743) - No such process 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3162743 is not found' 00:44:18.313 Process with pid 3162743 is not found 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:44:18.313 00:44:18.313 real 0m16.330s 00:44:18.313 user 0m33.861s 00:44:18.313 sys 0m0.781s 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:18.313 10:01:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:44:18.313 ************************************ 00:44:18.313 END TEST spdkcli_nvmf_tcp 00:44:18.313 ************************************ 00:44:18.313 10:01:53 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:18.313 10:01:53 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:18.313 10:01:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:18.313 10:01:53 -- common/autotest_common.sh@10 -- # set +x 00:44:18.313 ************************************ 00:44:18.313 START TEST nvmf_identify_passthru 00:44:18.313 ************************************ 00:44:18.313 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:44:18.313 * Looking for test 
storage... 00:44:18.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:18.313 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:18.313 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:44:18.313 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:18.574 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:44:18.574 10:01:53 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:18.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.575 --rc genhtml_branch_coverage=1 00:44:18.575 --rc genhtml_function_coverage=1 00:44:18.575 --rc genhtml_legend=1 00:44:18.575 --rc geninfo_all_blocks=1 00:44:18.575 --rc geninfo_unexecuted_blocks=1 00:44:18.575 00:44:18.575 ' 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:18.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.575 --rc genhtml_branch_coverage=1 00:44:18.575 --rc genhtml_function_coverage=1 00:44:18.575 --rc genhtml_legend=1 00:44:18.575 --rc geninfo_all_blocks=1 00:44:18.575 --rc geninfo_unexecuted_blocks=1 00:44:18.575 00:44:18.575 ' 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:18.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.575 --rc genhtml_branch_coverage=1 00:44:18.575 --rc genhtml_function_coverage=1 00:44:18.575 --rc genhtml_legend=1 00:44:18.575 --rc geninfo_all_blocks=1 00:44:18.575 --rc geninfo_unexecuted_blocks=1 00:44:18.575 00:44:18.575 ' 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:18.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:18.575 --rc genhtml_branch_coverage=1 00:44:18.575 --rc genhtml_function_coverage=1 00:44:18.575 --rc genhtml_legend=1 00:44:18.575 --rc geninfo_all_blocks=1 00:44:18.575 --rc geninfo_unexecuted_blocks=1 00:44:18.575 00:44:18.575 ' 00:44:18.575 10:01:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:18.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:18.575 10:01:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:18.575 10:01:53 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:44:18.575 10:01:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:18.575 10:01:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:18.575 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:18.575 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:18.576 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:18.576 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:18.576 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:18.576 10:01:53 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:44:18.576 10:01:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:44:26.710 10:02:00 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:26.710 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:26.710 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:26.710 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:26.710 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:26.710 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:26.710 10:02:00 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:26.711 10:02:00 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:26.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:26.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:44:26.711 00:44:26.711 --- 10.0.0.2 ping statistics --- 00:44:26.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:26.711 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:26.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:44:26.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:44:26.711 00:44:26.711 --- 10.0.0.1 ping statistics --- 00:44:26.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:26.711 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:26.711 10:02:01 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:44:26.711 10:02:01 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:44:26.711 10:02:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:44:26.972 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:44:26.972 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:44:26.972 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:26.972 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.972 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.973 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3169585 00:44:26.973 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:44:26.973 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:44:26.973 10:02:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3169585 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3169585 ']' 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:26.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:26.973 10:02:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:26.973 [2024-12-09 10:02:02.285494] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:44:26.973 [2024-12-09 10:02:02.285545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:26.973 [2024-12-09 10:02:02.376938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:44:26.973 [2024-12-09 10:02:02.396213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:26.973 [2024-12-09 10:02:02.396251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
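The bdf discovery and serial/model capture just logged reduce to a short shell pattern: gen_nvme.sh emits a bdev JSON config whose traddr fields are the local NVMe PCI addresses, and the spdk_nvme_identify output is filtered for the strings the passthru comparison needs. A minimal standalone sketch of that pattern, assuming an SPDK checkout at $rootdir and jq on PATH (the variable names here are illustrative, not the harness's own):

    # Discover the first NVMe BDF, mirroring get_first_nvme_bdf above.
    rootdir=${rootdir:-$HOME/spdk}   # assumption: SPDK checkout location
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}

    # Identify the controller over PCIe and keep the two fields the test
    # later compares against the NVMe-oF passthru controller.
    id_out=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0)
    serial=$(awk '/Serial Number:/ {print $3}' <<< "$id_out")
    model=$(awk '/Model Number:/ {print $3}' <<< "$id_out")
    echo "bdf=$bdf serial=$serial model=$model"

The same grep/awk extraction is reused later in the log against the fabrics-side controller (-r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1') so the two results can be compared with a plain shell test.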
00:44:26.973 [2024-12-09 10:02:02.396260] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:26.973 [2024-12-09 10:02:02.396266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:26.973 [2024-12-09 10:02:02.396272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:26.973 [2024-12-09 10:02:02.397905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.973 [2024-12-09 10:02:02.398019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:44:26.973 [2024-12-09 10:02:02.398173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.973 [2024-12-09 10:02:02.398174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:44:27.914 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.914 INFO: Log level set to 20 00:44:27.914 INFO: Requests: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "method": "nvmf_set_config", 00:44:27.914 "id": 1, 00:44:27.914 "params": { 00:44:27.914 "admin_cmd_passthru": { 00:44:27.914 "identify_ctrlr": true 00:44:27.914 } 00:44:27.914 } 00:44:27.914 } 00:44:27.914 00:44:27.914 INFO: response: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "id": 1, 00:44:27.914 "result": true 00:44:27.914 } 00:44:27.914 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.914 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.914 INFO: Setting log level to 20 00:44:27.914 INFO: Setting log level to 20 00:44:27.914 INFO: Log level set to 20 00:44:27.914 INFO: Log level set to 20 00:44:27.914 INFO: Requests: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "method": "framework_start_init", 00:44:27.914 "id": 1 00:44:27.914 } 00:44:27.914 00:44:27.914 INFO: Requests: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "method": "framework_start_init", 00:44:27.914 "id": 1 00:44:27.914 } 00:44:27.914 00:44:27.914 [2024-12-09 10:02:03.149085] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:44:27.914 INFO: response: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "id": 1, 00:44:27.914 "result": true 00:44:27.914 } 00:44:27.914 00:44:27.914 INFO: response: 00:44:27.914 { 00:44:27.914 "jsonrpc": "2.0", 00:44:27.914 "id": 1, 00:44:27.914 "result": true 00:44:27.914 } 00:44:27.914 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.914 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.914 10:02:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:44:27.914 INFO: Setting log level to 40 00:44:27.914 INFO: Setting log level to 40 00:44:27.914 INFO: Setting log level to 40 00:44:27.914 [2024-12-09 10:02:03.162396] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.914 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:27.914 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.914 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.175 Nvme0n1 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.175 [2024-12-09 10:02:03.553695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.175 [ 00:44:28.175 { 00:44:28.175 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:44:28.175 "subtype": "Discovery", 00:44:28.175 "listen_addresses": [], 00:44:28.175 "allow_any_host": true, 00:44:28.175 "hosts": [] 00:44:28.175 }, 00:44:28.175 { 00:44:28.175 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:44:28.175 "subtype": "NVMe", 00:44:28.175 "listen_addresses": [ 00:44:28.175 { 00:44:28.175 "trtype": "TCP", 00:44:28.175 "adrfam": "IPv4", 00:44:28.175 "traddr": "10.0.0.2", 00:44:28.175 "trsvcid": "4420" 00:44:28.175 } 00:44:28.175 ], 00:44:28.175 "allow_any_host": true, 00:44:28.175 "hosts": [], 00:44:28.175 "serial_number": 
"SPDK00000000000001", 00:44:28.175 "model_number": "SPDK bdev Controller", 00:44:28.175 "max_namespaces": 1, 00:44:28.175 "min_cntlid": 1, 00:44:28.175 "max_cntlid": 65519, 00:44:28.175 "namespaces": [ 00:44:28.175 { 00:44:28.175 "nsid": 1, 00:44:28.175 "bdev_name": "Nvme0n1", 00:44:28.175 "name": "Nvme0n1", 00:44:28.175 "nguid": "36344730526054870025384500000044", 00:44:28.175 "uuid": "36344730-5260-5487-0025-384500000044" 00:44:28.175 } 00:44:28.175 ] 00:44:28.175 } 00:44:28.175 ] 00:44:28.175 10:02:03 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:44:28.175 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:44:28.747 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:44:28.748 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:44:28.748 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:44:28.748 10:02:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:44:28.748 10:02:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:28.748 rmmod nvme_tcp 00:44:28.748 rmmod nvme_fabrics 00:44:28.748 rmmod nvme_keyring 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3169585 ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3169585 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3169585 ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3169585 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169585 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169585' 00:44:28.748 killing process with pid 3169585 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3169585 00:44:28.748 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3169585 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:29.009 10:02:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:29.009 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:29.009 10:02:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:31.553 10:02:06 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:31.553 00:44:31.553 real 0m12.902s 00:44:31.553 user 0m10.633s 00:44:31.553 sys 0m6.171s 00:44:31.553 10:02:06 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.553 10:02:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:44:31.553 ************************************ 00:44:31.553 END TEST nvmf_identify_passthru 00:44:31.553 ************************************ 00:44:31.553 10:02:06 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:31.553 10:02:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:31.553 10:02:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:31.553 10:02:06 -- common/autotest_common.sh@10 -- # set +x 00:44:31.553 ************************************ 00:44:31.553 START TEST nvmf_dif 00:44:31.553 ************************************ 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:44:31.553 * Looking for test storage... 
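Every RPC the identify_passthru run above exercised appears verbatim in the log through rpc_cmd, which wraps scripts/rpc.py. Replayed directly, the sequence that builds and tears down the passthru subsystem looks like the sketch below; this is a sketch only, assuming the default /var/tmp/spdk.sock socket and eliding the ip netns exec wrapper the harness prepends:

    rpc=$rootdir/scripts/rpc.py

    # Enable Identify-Controller passthru, then finish framework init
    # (the target was started with --wait-for-rpc, so config can land first).
    $rpc nvmf_set_config --passthru-identify-ctrlr
    $rpc framework_start_init

    # TCP transport with the flags exactly as logged (-u caps the
    # in-capsule data size at 8192 bytes).
    $rpc nvmf_create_transport -t tcp -o -u 8192

    # Export the local drive as a single-namespace subsystem listening
    # on the target-side address.
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Identify over fabrics now returns the backing drive's serial and
    # model, which the test asserts before deleting the subsystem.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1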
00:44:31.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:31.553 --rc genhtml_branch_coverage=1 00:44:31.553 --rc genhtml_function_coverage=1 00:44:31.553 --rc genhtml_legend=1 00:44:31.553 --rc geninfo_all_blocks=1 00:44:31.553 --rc geninfo_unexecuted_blocks=1 00:44:31.553 00:44:31.553 ' 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:31.553 --rc genhtml_branch_coverage=1 00:44:31.553 --rc genhtml_function_coverage=1 00:44:31.553 --rc genhtml_legend=1 00:44:31.553 --rc geninfo_all_blocks=1 00:44:31.553 --rc geninfo_unexecuted_blocks=1 00:44:31.553 00:44:31.553 ' 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:44:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:31.553 --rc genhtml_branch_coverage=1 00:44:31.553 --rc genhtml_function_coverage=1 00:44:31.553 --rc genhtml_legend=1 00:44:31.553 --rc geninfo_all_blocks=1 00:44:31.553 --rc geninfo_unexecuted_blocks=1 00:44:31.553 00:44:31.553 ' 00:44:31.553 10:02:06 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:31.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:31.553 --rc genhtml_branch_coverage=1 00:44:31.553 --rc genhtml_function_coverage=1 00:44:31.553 --rc genhtml_legend=1 00:44:31.553 --rc geninfo_all_blocks=1 00:44:31.553 --rc geninfo_unexecuted_blocks=1 00:44:31.553 00:44:31.553 ' 00:44:31.553 10:02:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:31.553 10:02:06 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:31.553 10:02:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.553 10:02:06 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.553 10:02:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.553 10:02:06 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:44:31.553 10:02:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:31.553 10:02:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:44:31.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:31.554 10:02:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:44:31.554 10:02:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:44:31.554 10:02:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:44:31.554 10:02:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:44:31.554 10:02:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:31.554 10:02:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:31.554 10:02:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:44:31.554 10:02:06 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:44:31.554 10:02:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:44:38.156 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:38.156 
10:02:13 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:44:38.156 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:44:38.156 Found net devices under 0000:4b:00.0: cvl_0_0 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:44:38.156 Found net devices under 0000:4b:00.1: cvl_0_1 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:44:38.156 10:02:13 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:44:38.417 10:02:13 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:44:38.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:44:38.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:44:38.417 00:44:38.417 --- 10.0.0.2 ping statistics --- 00:44:38.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.417 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:44:38.678 10:02:13 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:44:38.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
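The nvmf_tcp_init block above (run here and earlier in the identify_passthru test) builds a two-port loopback topology: one E810 port moves into a network namespace as the target at 10.0.0.2 while its back-to-back peer stays in the root namespace as the initiator at 10.0.0.1. A sketch of the same setup in plain iproute2, using the harness's interface names and omitting the address flushes:

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port toward the initiator and verify both
    # directions, as logged around this point.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1

The harness additionally tags its iptables rule with an SPDK_NVMF comment so teardown can strip it with iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of the identify_passthru run.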
00:44:38.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:44:38.678 00:44:38.678 --- 10.0.0.1 ping statistics --- 00:44:38.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:44:38.678 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:44:38.678 10:02:13 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:44:38.678 10:02:13 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:44:38.678 10:02:13 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:44:38.678 10:02:13 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:41.977 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:44:41.977 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:44:41.977 10:02:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:44:41.977 10:02:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3175455 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3175455 00:44:41.977 10:02:17 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3175455 ']' 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:44:41.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:41.977 10:02:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:41.977 [2024-12-09 10:02:17.243368] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:44:41.977 [2024-12-09 10:02:17.243434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:44:41.977 [2024-12-09 10:02:17.343477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:41.977 [2024-12-09 10:02:17.370213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:44:41.978 [2024-12-09 10:02:17.370263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:44:41.978 [2024-12-09 10:02:17.370272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:44:41.978 [2024-12-09 10:02:17.370279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:44:41.978 [2024-12-09 10:02:17.370286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:44:41.978 [2024-12-09 10:02:17.371114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:44:42.921 10:02:18 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 10:02:18 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:44:42.921 10:02:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:44:42.921 10:02:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 [2024-12-09 10:02:18.101880] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.921 10:02:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 ************************************ 00:44:42.921 START TEST fio_dif_1_default 00:44:42.921 ************************************ 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- 
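nvmfappstart, traced above, reduces to launching nvmf_tgt inside the namespace and blocking until its RPC socket answers; the EAL notices and "Reactor started on core 0" are the target coming up on a single core (-c 0x1 in the EAL parameter dump). A reduced sketch of that start/wait handshake, assuming the repo path from this run and polling with the stock rpc.py client (the in-tree waitforlisten helper additionally handles retries, timeouts, and kill-on-failure):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # -i 0 picks shared-memory id 0, -e 0xFFFF enables every tracepoint group,
  # matching the command line and trace notices above.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the app responds, ~10 s budget.
  for ((i = 0; i < 100; i++)); do
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1
  done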
target/dif.sh@31 -- # create_subsystem 0 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 bdev_null0 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:42.921 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:42.922 [2024-12-09 10:02:18.174196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:42.922 { 00:44:42.922 "params": { 00:44:42.922 "name": "Nvme$subsystem", 00:44:42.922 "trtype": "$TEST_TRANSPORT", 00:44:42.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:42.922 "adrfam": "ipv4", 00:44:42.922 "trsvcid": "$NVMF_PORT", 00:44:42.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:42.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:42.922 "hdgst": ${hdgst:-false}, 00:44:42.922 
"ddgst": ${ddgst:-false} 00:44:42.922 }, 00:44:42.922 "method": "bdev_nvme_attach_controller" 00:44:42.922 } 00:44:42.922 EOF 00:44:42.922 )") 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:42.922 "params": { 00:44:42.922 "name": "Nvme0", 00:44:42.922 "trtype": "tcp", 00:44:42.922 "traddr": "10.0.0.2", 00:44:42.922 "adrfam": "ipv4", 00:44:42.922 "trsvcid": "4420", 00:44:42.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:42.922 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:42.922 "hdgst": false, 00:44:42.922 "ddgst": false 00:44:42.922 }, 00:44:42.922 "method": "bdev_nvme_attach_controller" 00:44:42.922 }' 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:42.922 10:02:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:43.183 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:43.183 fio-3.35 00:44:43.183 Starting 1 thread 00:44:55.430 00:44:55.430 filename0: (groupid=0, jobs=1): err= 0: pid=3176013: Mon Dec 9 10:02:29 2024 00:44:55.430 read: IOPS=142, BW=571KiB/s (585kB/s)(5728KiB/10025msec) 00:44:55.430 slat (nsec): min=5471, max=32320, avg=6525.81, stdev=2086.69 00:44:55.430 clat (usec): min=598, max=43023, avg=27984.66, stdev=18875.96 00:44:55.430 lat (usec): min=603, max=43031, avg=27991.19, stdev=18875.56 00:44:55.430 clat percentiles (usec): 00:44:55.430 | 1.00th=[ 750], 5.00th=[ 832], 10.00th=[ 857], 20.00th=[ 914], 00:44:55.430 | 30.00th=[ 1012], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:44:55.430 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:44:55.430 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:44:55.430 | 99.99th=[43254] 00:44:55.430 bw ( KiB/s): min= 384, max= 768, per=99.94%, avg=571.20, stdev=181.40, samples=20 00:44:55.430 iops : min= 96, max= 192, avg=142.80, stdev=45.35, samples=20 00:44:55.430 lat (usec) : 750=1.12%, 1000=28.35% 00:44:55.430 lat (msec) : 2=2.93%, 4=0.28%, 50=67.32% 00:44:55.430 cpu : usr=93.19%, sys=6.59%, ctx=7, majf=0, minf=216 00:44:55.430 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:44:55.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:44:55.430 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:44:55.430 latency : target=0, window=0, percentile=100.00%, depth=4 
00:44:55.430 00:44:55.430 Run status group 0 (all jobs): 00:44:55.430 READ: bw=571KiB/s (585kB/s), 571KiB/s-571KiB/s (585kB/s-585kB/s), io=5728KiB (5865kB), run=10025-10025msec 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 00:44:55.430 real 0m11.276s 00:44:55.430 user 0m23.930s 00:44:55.430 sys 0m1.005s 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 ************************************ 00:44:55.430 END TEST fio_dif_1_default 00:44:55.430 ************************************ 00:44:55.430 10:02:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:44:55.430 10:02:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:55.430 10:02:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 ************************************ 00:44:55.430 START TEST fio_dif_1_multi_subsystems 00:44:55.430 ************************************ 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 bdev_null0 00:44:55.430 10:02:29 
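Every TEST in this file brackets its fio run with the same RPC lifecycle, visible above in its teardown form for fio_dif_1_default and, right after, in its setup form for the two-subsystem variant: a DIF-capable null bdev is created, exposed through an NVMe-oF subsystem with a TCP listener, and both are deleted once fio exits. A sketch of that bracket for the two-subsystem case, assuming the in-tree rpc.py against the default /var/tmp/spdk.sock:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for sub in 0 1; do
      # 64 MB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 1
      $RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
          --serial-number "53313233-$sub" --allow-any-host
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
          -t tcp -a 10.0.0.2 -s 4420
  done

  # ... fio runs against the exported namespaces here ...

  for sub in 0 1; do
      $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
      $RPC bdev_null_delete "bdev_null$sub"
  done

Note that the RPC socket is a filesystem UNIX socket, so rpc.py works from the root namespace even though the target's network stack lives inside cvl_0_0_ns_spdk.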
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 [2024-12-09 10:02:29.530433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 bdev_null1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:55.430 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:55.431 { 00:44:55.431 "params": { 00:44:55.431 "name": "Nvme$subsystem", 00:44:55.431 "trtype": "$TEST_TRANSPORT", 00:44:55.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:55.431 "adrfam": "ipv4", 00:44:55.431 "trsvcid": "$NVMF_PORT", 00:44:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:55.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:55.431 "hdgst": ${hdgst:-false}, 00:44:55.431 "ddgst": ${ddgst:-false} 00:44:55.431 }, 00:44:55.431 "method": "bdev_nvme_attach_controller" 00:44:55.431 } 00:44:55.431 EOF 00:44:55.431 )") 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:55.431 
10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:44:55.431 { 00:44:55.431 "params": { 00:44:55.431 "name": "Nvme$subsystem", 00:44:55.431 "trtype": "$TEST_TRANSPORT", 00:44:55.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:44:55.431 "adrfam": "ipv4", 00:44:55.431 "trsvcid": "$NVMF_PORT", 00:44:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:44:55.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:44:55.431 "hdgst": ${hdgst:-false}, 00:44:55.431 "ddgst": ${ddgst:-false} 00:44:55.431 }, 00:44:55.431 "method": "bdev_nvme_attach_controller" 00:44:55.431 } 00:44:55.431 EOF 00:44:55.431 )") 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:44:55.431 "params": { 00:44:55.431 "name": "Nvme0", 00:44:55.431 "trtype": "tcp", 00:44:55.431 "traddr": "10.0.0.2", 00:44:55.431 "adrfam": "ipv4", 00:44:55.431 "trsvcid": "4420", 00:44:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:55.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:55.431 "hdgst": false, 00:44:55.431 "ddgst": false 00:44:55.431 }, 00:44:55.431 "method": "bdev_nvme_attach_controller" 00:44:55.431 },{ 00:44:55.431 "params": { 00:44:55.431 "name": "Nvme1", 00:44:55.431 "trtype": "tcp", 00:44:55.431 "traddr": "10.0.0.2", 00:44:55.431 "adrfam": "ipv4", 00:44:55.431 "trsvcid": "4420", 00:44:55.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:44:55.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:44:55.431 "hdgst": false, 00:44:55.431 "ddgst": false 00:44:55.431 }, 00:44:55.431 "method": "bdev_nvme_attach_controller" 00:44:55.431 }' 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 
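The ldd | grep | awk probes interleaved through this stretch of the trace handle one build-dependent detail: if the fio plugin was linked against an address sanitizer, the sanitizer runtime must appear in LD_PRELOAD before the plugin itself, so the helper resolves libasan or libclang_rt.asan from the plugin's link dependencies and prepends whatever it finds. Both probes come back empty in this run, hence the bare LD_PRELOAD below. A sketch of the probe, with the plugin path taken from this job:

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

  asan_lib=
  for sanitizer in libasan libclang_rt.asan; do
      # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n $asan_lib ]] && break
  done

  # Sanitizer runtime (possibly empty) first, then the fio ioengine plugin.
  # The harness passes the config and job file as /dev/fd pipes; plain files
  # (bdev.json, job.fio here) behave the same for a standalone run.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio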
-- # asan_lib= 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:44:55.431 10:02:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:44:55.431 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:55.431 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:44:55.431 fio-3.35 00:44:55.431 Starting 2 threads 00:45:05.552 00:45:05.552 filename0: (groupid=0, jobs=1): err= 0: pid=3178478: Mon Dec 9 10:02:40 2024 00:45:05.552 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10039msec) 00:45:05.552 slat (nsec): min=5541, max=33224, avg=6338.65, stdev=1367.41 00:45:05.552 clat (usec): min=585, max=43142, avg=21071.45, stdev=20150.27 00:45:05.552 lat (usec): min=591, max=43175, avg=21077.79, stdev=20150.27 00:45:05.552 clat percentiles (usec): 00:45:05.552 | 1.00th=[ 635], 5.00th=[ 791], 10.00th=[ 824], 20.00th=[ 848], 00:45:05.552 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[41157], 60.00th=[41157], 00:45:05.552 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:05.552 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:45:05.552 | 99.99th=[43254] 00:45:05.552 bw ( KiB/s): min= 672, max= 768, per=49.98%, avg=760.00, stdev=22.92, samples=20 00:45:05.552 iops : min= 168, max= 192, avg=190.00, stdev= 5.73, samples=20 00:45:05.552 lat (usec) : 750=2.99%, 1000=46.80% 00:45:05.552 lat (msec) : 50=50.21% 00:45:05.552 cpu : usr=95.10%, sys=4.69%, ctx=14, majf=0, minf=167 00:45:05.552 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:05.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:05.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:05.552 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:05.552 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:05.552 filename1: (groupid=0, jobs=1): err= 0: pid=3178479: Mon Dec 9 10:02:40 2024 00:45:05.552 read: IOPS=191, BW=765KiB/s (783kB/s)(7648KiB/10000msec) 00:45:05.552 slat (nsec): min=5479, max=32174, avg=6309.80, stdev=1735.41 00:45:05.552 clat (usec): min=595, max=41822, avg=20903.43, stdev=20148.46 00:45:05.552 lat (usec): min=603, max=41848, avg=20909.74, stdev=20148.42 00:45:05.552 clat percentiles (usec): 00:45:05.552 | 1.00th=[ 627], 5.00th=[ 750], 10.00th=[ 807], 20.00th=[ 848], 00:45:05.552 | 30.00th=[ 865], 40.00th=[ 881], 50.00th=[ 1037], 60.00th=[41157], 00:45:05.552 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:45:05.552 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:45:05.552 | 99.99th=[41681] 00:45:05.552 bw ( KiB/s): min= 704, max= 768, per=50.25%, avg=764.63, stdev=14.68, samples=19 00:45:05.552 iops : min= 176, max= 192, avg=191.16, stdev= 3.67, samples=19 00:45:05.552 lat (usec) : 750=5.07%, 1000=44.61% 00:45:05.552 lat (msec) : 2=0.31%, 4=0.21%, 50=49.79% 00:45:05.552 cpu : usr=95.16%, sys=4.63%, ctx=8, majf=0, minf=84 00:45:05.552 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:05.552 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:05.552 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:05.552 issued rwts: total=1912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:05.552 latency : target=0, window=0, percentile=100.00%, depth=4 00:45:05.552 00:45:05.552 Run status group 0 (all jobs): 00:45:05.552 READ: bw=1520KiB/s (1557kB/s), 759KiB/s-765KiB/s (777kB/s-783kB/s), io=14.9MiB (15.6MB), run=10000-10039msec 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 00:45:05.815 real 0m11.580s 00:45:05.815 user 0m31.045s 00:45:05.815 sys 0m1.296s 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 ************************************ 00:45:05.815 END TEST fio_dif_1_multi_subsystems 00:45:05.815 ************************************ 00:45:05.815 10:02:41 nvmf_dif -- 
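As a quick consistency check on the run status above: filename0 read 7616 KiB and filename1 7648 KiB, 15264 KiB ≈ 14.9 MiB in total, matching the reported io; 1904 completions in 10.039 s is the 190 IOPS shown for filename0; and 15264 KiB over the union of the two job windows (run=10000-10039 msec) comes out at the reported 1520 KiB/s aggregate, slightly below the 759+765 KiB/s per-thread sum because the two jobs did not finish at the same instant.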
target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:45:05.815 10:02:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:05.815 10:02:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 ************************************ 00:45:05.815 START TEST fio_dif_rand_params 00:45:05.815 ************************************ 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 bdev_null0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:05.815 [2024-12-09 10:02:41.194223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:05.815 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:05.815 { 00:45:05.815 "params": { 00:45:05.815 "name": "Nvme$subsystem", 00:45:05.815 "trtype": "$TEST_TRANSPORT", 00:45:05.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:05.815 "adrfam": "ipv4", 00:45:05.815 "trsvcid": "$NVMF_PORT", 00:45:05.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:05.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:05.815 "hdgst": ${hdgst:-false}, 00:45:05.815 "ddgst": ${ddgst:-false} 00:45:05.816 }, 00:45:05.816 "method": "bdev_nvme_attach_controller" 00:45:05.816 } 00:45:05.816 EOF 00:45:05.816 )") 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:05.816 "params": { 00:45:05.816 "name": "Nvme0", 00:45:05.816 "trtype": "tcp", 00:45:05.816 "traddr": "10.0.0.2", 00:45:05.816 "adrfam": "ipv4", 00:45:05.816 "trsvcid": "4420", 00:45:05.816 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:05.816 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:05.816 "hdgst": false, 00:45:05.816 "ddgst": false 00:45:05.816 }, 00:45:05.816 "method": "bdev_nvme_attach_controller" 00:45:05.816 }' 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:05.816 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:06.105 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:06.105 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:06.105 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:06.105 10:02:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:06.374 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:06.374 ... 
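The job banner that follows reflects the parameter block set at target/dif.sh@103 for this pass: DIF type 3 on the null bdevs, 128 KiB blocks, three jobs at queue depth 3, five seconds of runtime. The generated jobfile itself is never echoed, so the following is a plausible reconstruction under those parameters, with the bdev name Nvme0n1 assumed from SPDK's controller-name + namespace-index convention (thread=1 is required by the spdk_bdev ioengine):

  cat > job.fio <<'EOF'
  [global]
  ioengine=spdk_bdev      # supplied on the fio command line in the trace
  thread=1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1

  [filename0]
  filename=Nvme0n1        # namespace bdev from the attach-controller config
  EOF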
00:45:06.374 fio-3.35 00:45:06.374 Starting 3 threads 00:45:12.950 00:45:12.950 filename0: (groupid=0, jobs=1): err= 0: pid=3180679: Mon Dec 9 10:02:47 2024 00:45:12.950 read: IOPS=360, BW=45.1MiB/s (47.3MB/s)(226MiB/5004msec) 00:45:12.950 slat (nsec): min=5537, max=33445, avg=8346.45, stdev=1528.29 00:45:12.950 clat (usec): min=4120, max=86957, avg=8305.78, stdev=6441.06 00:45:12.950 lat (usec): min=4128, max=86965, avg=8314.13, stdev=6441.07 00:45:12.950 clat percentiles (usec): 00:45:12.950 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6456], 00:45:12.950 | 30.00th=[ 6783], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7701], 00:45:12.950 | 70.00th=[ 7963], 80.00th=[ 8291], 90.00th=[ 8717], 95.00th=[ 9372], 00:45:12.950 | 99.00th=[47449], 99.50th=[47973], 99.90th=[48497], 99.95th=[86508], 00:45:12.950 | 99.99th=[86508] 00:45:12.950 bw ( KiB/s): min=38144, max=51712, per=38.21%, avg=46131.20, stdev=4681.37, samples=10 00:45:12.950 iops : min= 298, max= 404, avg=360.40, stdev=36.57, samples=10 00:45:12.950 lat (msec) : 10=97.23%, 20=0.33%, 50=2.38%, 100=0.06% 00:45:12.950 cpu : usr=93.68%, sys=6.08%, ctx=12, majf=0, minf=54 00:45:12.950 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.950 filename0: (groupid=0, jobs=1): err= 0: pid=3180680: Mon Dec 9 10:02:47 2024 00:45:12.950 read: IOPS=294, BW=36.8MiB/s (38.6MB/s)(184MiB/5005msec) 00:45:12.950 slat (nsec): min=5792, max=42806, avg=9267.02, stdev=1905.42 00:45:12.950 clat (usec): min=5211, max=89870, avg=10165.70, stdev=6729.91 00:45:12.950 lat (usec): min=5220, max=89879, avg=10174.96, stdev=6729.75 00:45:12.950 clat percentiles (usec): 00:45:12.950 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 7373], 20.00th=[ 7963], 00:45:12.950 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9634], 00:45:12.950 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11731], 00:45:12.950 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[89654], 00:45:12.950 | 99.99th=[89654] 00:45:12.950 bw ( KiB/s): min=29440, max=44288, per=31.21%, avg=37683.20, stdev=4843.45, samples=10 00:45:12.950 iops : min= 230, max= 346, avg=294.40, stdev=37.84, samples=10 00:45:12.950 lat (msec) : 10=68.68%, 20=28.75%, 50=1.97%, 100=0.61% 00:45:12.950 cpu : usr=91.07%, sys=7.29%, ctx=502, majf=0, minf=75 00:45:12.950 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 issued rwts: total=1475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.950 filename0: (groupid=0, jobs=1): err= 0: pid=3180681: Mon Dec 9 10:02:47 2024 00:45:12.950 read: IOPS=293, BW=36.6MiB/s (38.4MB/s)(185MiB/5045msec) 00:45:12.950 slat (nsec): min=5542, max=32091, avg=8363.55, stdev=1457.50 00:45:12.950 clat (usec): min=4645, max=91765, avg=10192.90, stdev=4813.32 00:45:12.950 lat (usec): min=4651, max=91774, avg=10201.26, stdev=4813.65 00:45:12.950 clat percentiles (usec): 00:45:12.950 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 8356], 
00:45:12.950 | 30.00th=[ 9241], 40.00th=[ 9634], 50.00th=[10028], 60.00th=[10421], 00:45:12.950 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11600], 95.00th=[11994], 00:45:12.950 | 99.00th=[46924], 99.50th=[50070], 99.90th=[54264], 99.95th=[91751], 00:45:12.950 | 99.99th=[91751] 00:45:12.950 bw ( KiB/s): min=23342, max=47360, per=31.32%, avg=37815.80, stdev=6001.16, samples=10 00:45:12.950 iops : min= 182, max= 370, avg=295.40, stdev=46.98, samples=10 00:45:12.950 lat (msec) : 10=48.07%, 20=50.85%, 50=0.47%, 100=0.61% 00:45:12.950 cpu : usr=93.91%, sys=5.83%, ctx=13, majf=0, minf=138 00:45:12.950 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:12.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:12.950 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:12.950 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:12.950 00:45:12.950 Run status group 0 (all jobs): 00:45:12.950 READ: bw=118MiB/s (124MB/s), 36.6MiB/s-45.1MiB/s (38.4MB/s-47.3MB/s), io=595MiB (624MB), run=5004-5045msec 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 bdev_null0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.950 [2024-12-09 10:02:47.445157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.950 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 bdev_null1 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 bdev_null2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:12.951 { 00:45:12.951 "params": { 00:45:12.951 "name": "Nvme$subsystem", 00:45:12.951 "trtype": "$TEST_TRANSPORT", 00:45:12.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.951 "adrfam": "ipv4", 00:45:12.951 "trsvcid": "$NVMF_PORT", 00:45:12.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.951 "hdgst": ${hdgst:-false}, 00:45:12.951 "ddgst": ${ddgst:-false} 00:45:12.951 }, 00:45:12.951 "method": "bdev_nvme_attach_controller" 00:45:12.951 } 00:45:12.951 EOF 00:45:12.951 )") 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:12.951 { 00:45:12.951 "params": { 00:45:12.951 "name": "Nvme$subsystem", 00:45:12.951 "trtype": "$TEST_TRANSPORT", 00:45:12.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.951 "adrfam": "ipv4", 00:45:12.951 "trsvcid": "$NVMF_PORT", 00:45:12.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.951 "hdgst": ${hdgst:-false}, 00:45:12.951 "ddgst": ${ddgst:-false} 00:45:12.951 }, 00:45:12.951 "method": "bdev_nvme_attach_controller" 00:45:12.951 } 00:45:12.951 EOF 00:45:12.951 )") 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ 
)) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:12.951 { 00:45:12.951 "params": { 00:45:12.951 "name": "Nvme$subsystem", 00:45:12.951 "trtype": "$TEST_TRANSPORT", 00:45:12.951 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:12.951 "adrfam": "ipv4", 00:45:12.951 "trsvcid": "$NVMF_PORT", 00:45:12.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:12.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:12.951 "hdgst": ${hdgst:-false}, 00:45:12.951 "ddgst": ${ddgst:-false} 00:45:12.951 }, 00:45:12.951 "method": "bdev_nvme_attach_controller" 00:45:12.951 } 00:45:12.951 EOF 00:45:12.951 )") 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:12.951 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:12.952 "params": { 00:45:12.952 "name": "Nvme0", 00:45:12.952 "trtype": "tcp", 00:45:12.952 "traddr": "10.0.0.2", 00:45:12.952 "adrfam": "ipv4", 00:45:12.952 "trsvcid": "4420", 00:45:12.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:12.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:12.952 "hdgst": false, 00:45:12.952 "ddgst": false 00:45:12.952 }, 00:45:12.952 "method": "bdev_nvme_attach_controller" 00:45:12.952 },{ 00:45:12.952 "params": { 00:45:12.952 "name": "Nvme1", 00:45:12.952 "trtype": "tcp", 00:45:12.952 "traddr": "10.0.0.2", 00:45:12.952 "adrfam": "ipv4", 00:45:12.952 "trsvcid": "4420", 00:45:12.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:12.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:12.952 "hdgst": false, 00:45:12.952 "ddgst": false 00:45:12.952 }, 00:45:12.952 "method": "bdev_nvme_attach_controller" 00:45:12.952 },{ 00:45:12.952 "params": { 00:45:12.952 "name": "Nvme2", 00:45:12.952 "trtype": "tcp", 00:45:12.952 "traddr": "10.0.0.2", 00:45:12.952 "adrfam": "ipv4", 00:45:12.952 "trsvcid": "4420", 00:45:12.952 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:45:12.952 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:45:12.952 "hdgst": false, 00:45:12.952 "ddgst": false 00:45:12.952 }, 00:45:12.952 "method": "bdev_nvme_attach_controller" 00:45:12.952 }' 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:12.952 10:02:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:12.952 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.952 ... 00:45:12.952 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.952 ... 00:45:12.952 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:45:12.952 ... 00:45:12.952 fio-3.35 00:45:12.952 Starting 24 threads 00:45:25.169 00:45:25.169 filename0: (groupid=0, jobs=1): err= 0: pid=3182188: Mon Dec 9 10:02:58 2024 00:45:25.169 read: IOPS=710, BW=2841KiB/s (2909kB/s)(27.8MiB/10003msec) 00:45:25.169 slat (nsec): min=5676, max=60765, avg=9942.06, stdev=5968.90 00:45:25.170 clat (usec): min=1177, max=35378, avg=22446.73, stdev=4548.49 00:45:25.170 lat (usec): min=1189, max=35400, avg=22456.67, stdev=4547.86 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[ 1532], 5.00th=[14091], 10.00th=[16319], 20.00th=[23462], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:45:25.170 | 99.00th=[25297], 99.50th=[26084], 99.90th=[34866], 99.95th=[35390], 00:45:25.170 | 99.99th=[35390] 00:45:25.170 bw ( KiB/s): min= 2560, max= 4352, per=4.43%, avg=2842.95, stdev=403.29, samples=19 00:45:25.170 iops : min= 640, max= 1088, avg=710.74, stdev=100.82, samples=19 00:45:25.170 lat (msec) : 2=2.01%, 4=0.24%, 10=1.13%, 20=12.47%, 50=84.15% 00:45:25.170 cpu : usr=97.46%, sys=1.58%, ctx=781, majf=0, minf=9 00:45:25.170 IO depths : 1=5.3%, 2=11.4%, 4=24.4%, 8=51.6%, 16=7.3%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=7104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182189: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=663, BW=2653KiB/s (2716kB/s)(25.9MiB/10012msec) 00:45:25.170 slat (nsec): min=5732, max=66152, avg=15528.39, stdev=9734.74 00:45:25.170 clat (usec): min=9588, max=33111, avg=23987.75, stdev=1114.32 00:45:25.170 lat (usec): min=9597, max=33124, avg=24003.27, stdev=1114.14 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.170 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:45:25.170 | 99.99th=[33162] 00:45:25.170 bw ( KiB/s): min= 2560, max= 2816, per=4.13%, avg=2649.60, stdev=73.12, samples=20 00:45:25.170 iops : min= 640, max= 704, avg=662.40, stdev=18.28, samples=20 00:45:25.170 lat (msec) : 10=0.24%, 20=0.78%, 50=98.98% 00:45:25.170 cpu : usr=98.67%, sys=1.00%, 
ctx=87, majf=0, minf=10 00:45:25.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182190: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.1MiB/10013msec) 00:45:25.170 slat (nsec): min=5702, max=52082, avg=11112.84, stdev=7091.80 00:45:25.170 clat (usec): min=7233, max=25886, avg=23918.19, stdev=1788.15 00:45:25.170 lat (usec): min=7241, max=25893, avg=23929.30, stdev=1787.44 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[10028], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.170 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:45:25.170 | 99.99th=[25822] 00:45:25.170 bw ( KiB/s): min= 2554, max= 3072, per=4.15%, avg=2662.10, stdev=114.78, samples=20 00:45:25.170 iops : min= 638, max= 768, avg=665.50, stdev=28.72, samples=20 00:45:25.170 lat (msec) : 10=1.02%, 20=0.42%, 50=98.56% 00:45:25.170 cpu : usr=98.84%, sys=0.89%, ctx=14, majf=0, minf=9 00:45:25.170 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182191: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=659, BW=2638KiB/s (2701kB/s)(25.8MiB/10011msec) 00:45:25.170 slat (nsec): min=5711, max=96325, avg=25594.47, stdev=14390.71 00:45:25.170 clat (usec): min=13252, max=32840, avg=24022.79, stdev=814.85 00:45:25.170 lat (usec): min=13289, max=32859, avg=24048.38, stdev=813.80 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.170 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[24773], 00:45:25.170 | 99.00th=[25822], 99.50th=[29754], 99.90th=[32637], 99.95th=[32900], 00:45:25.170 | 99.99th=[32900] 00:45:25.170 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2634.11, stdev=64.93, samples=19 00:45:25.170 iops : min= 640, max= 672, avg=658.53, stdev=16.23, samples=19 00:45:25.170 lat (msec) : 20=0.32%, 50=99.68% 00:45:25.170 cpu : usr=98.74%, sys=1.00%, ctx=14, majf=0, minf=9 00:45:25.170 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182192: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10011msec) 
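
Each create_subsystem call traced above reduces to four RPCs against the running target: create a null bdev that carries per-block metadata for the protection information, wrap it in a subsystem, attach the namespace, and open an NVMe/TCP listener. A minimal standalone sketch of that sequence, assuming scripts/rpc.py from the SPDK tree and a target on the default RPC socket:

# 64 MiB null bdev with 512-byte blocks; --md-size 16 reserves 16 bytes of
# per-block metadata to hold the DIF, and --dif-type 2 selects NVMe
# protection information type 2 (a later pass of this test uses type 1).
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
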
00:45:25.170 slat (nsec): min=4258, max=60223, avg=15233.34, stdev=9960.61 00:45:25.170 clat (usec): min=11417, max=39885, avg=24035.83, stdev=1154.66 00:45:25.170 lat (usec): min=11426, max=39894, avg=24051.07, stdev=1154.63 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.170 | 99.00th=[25560], 99.50th=[26084], 99.90th=[35914], 99.95th=[36439], 00:45:25.170 | 99.99th=[40109] 00:45:25.170 bw ( KiB/s): min= 2544, max= 2704, per=4.11%, avg=2640.84, stdev=65.42, samples=19 00:45:25.170 iops : min= 636, max= 676, avg=660.21, stdev=16.36, samples=19 00:45:25.170 lat (msec) : 20=0.85%, 50=99.15% 00:45:25.170 cpu : usr=98.69%, sys=0.93%, ctx=137, majf=0, minf=9 00:45:25.170 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182193: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=660, BW=2642KiB/s (2706kB/s)(25.8MiB/10003msec) 00:45:25.170 slat (usec): min=5, max=101, avg=20.16, stdev=17.33 00:45:25.170 clat (usec): min=14133, max=31271, avg=24043.99, stdev=864.58 00:45:25.170 lat (usec): min=14139, max=31311, avg=24064.15, stdev=862.12 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.170 | 99.00th=[25822], 99.50th=[28443], 99.90th=[31065], 99.95th=[31327], 00:45:25.170 | 99.99th=[31327] 00:45:25.170 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2634.11, stdev=64.93, samples=19 00:45:25.170 iops : min= 640, max= 672, avg=658.53, stdev=16.23, samples=19 00:45:25.170 lat (msec) : 20=0.54%, 50=99.46% 00:45:25.170 cpu : usr=98.22%, sys=1.08%, ctx=204, majf=0, minf=9 00:45:25.170 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182194: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=668, BW=2672KiB/s (2736kB/s)(26.1MiB/10020msec) 00:45:25.170 slat (usec): min=5, max=103, avg=13.40, stdev=12.73 00:45:25.170 clat (usec): min=6672, max=31353, avg=23839.79, stdev=1909.45 00:45:25.170 lat (usec): min=6685, max=31359, avg=23853.19, stdev=1907.95 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[11600], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:45:25.170 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.170 | 99.00th=[25297], 99.50th=[25822], 99.90th=[29754], 99.95th=[30540], 00:45:25.170 | 99.99th=[31327] 
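
The fio invocation traced earlier passes two process-substitution descriptors: /dev/fd/62 carries the JSON assembled by the here-doc/jq pipeline, and /dev/fd/61 carries the generated job file. A reduced single-controller sketch of the same pattern (file names here are placeholders; the real run preloads the plugin from the workspace's build/fio/spdk_bdev path):

# JSON config telling the fio bdev engine to attach one NVMe-oF controller,
# mirroring one entry of the printf output in the trace above
cat > bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": false, "ddgst": false}}]}]}
EOF
# Preload the external engine so fio can resolve ioengine=spdk_bdev
LD_PRELOAD=./build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    --spdk_json_conf=bdev.json jobfile.fio
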
00:45:25.170 bw ( KiB/s): min= 2560, max= 3232, per=4.16%, avg=2670.90, stdev=146.01, samples=20 00:45:25.170 iops : min= 640, max= 808, avg=667.70, stdev=36.50, samples=20 00:45:25.170 lat (msec) : 10=0.76%, 20=1.90%, 50=97.34% 00:45:25.170 cpu : usr=98.80%, sys=0.93%, ctx=21, majf=0, minf=9 00:45:25.170 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:45:25.170 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.170 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.170 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.170 filename0: (groupid=0, jobs=1): err= 0: pid=3182195: Mon Dec 9 10:02:58 2024 00:45:25.170 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10004msec) 00:45:25.170 slat (nsec): min=5478, max=93400, avg=19439.88, stdev=15451.39 00:45:25.170 clat (usec): min=4966, max=42985, avg=23824.07, stdev=3242.40 00:45:25.170 lat (usec): min=4973, max=42994, avg=23843.51, stdev=3242.57 00:45:25.170 clat percentiles (usec): 00:45:25.170 | 1.00th=[14222], 5.00th=[18220], 10.00th=[21365], 20.00th=[23462], 00:45:25.170 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.170 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26870], 00:45:25.170 | 99.00th=[36963], 99.50th=[39060], 99.90th=[40109], 99.95th=[42730], 00:45:25.170 | 99.99th=[42730] 00:45:25.170 bw ( KiB/s): min= 2544, max= 2864, per=4.15%, avg=2661.89, stdev=85.03, samples=19 00:45:25.170 iops : min= 636, max= 716, avg=665.47, stdev=21.26, samples=19 00:45:25.171 lat (msec) : 10=0.25%, 20=7.51%, 50=92.23% 00:45:25.171 cpu : usr=98.64%, sys=1.01%, ctx=44, majf=0, minf=9 00:45:25.171 IO depths : 1=3.8%, 2=8.8%, 4=20.9%, 8=57.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=93.1%, 8=1.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182196: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10006msec) 00:45:25.171 slat (usec): min=5, max=102, avg=17.41, stdev=15.09 00:45:25.171 clat (usec): min=9177, max=41978, avg=23905.28, stdev=3372.26 00:45:25.171 lat (usec): min=9183, max=41994, avg=23922.69, stdev=3372.62 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[14484], 5.00th=[17171], 10.00th=[20317], 20.00th=[23462], 00:45:25.171 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.171 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25822], 95.00th=[30016], 00:45:25.171 | 99.00th=[35914], 99.50th=[37487], 99.90th=[41681], 99.95th=[42206], 00:45:25.171 | 99.99th=[42206] 00:45:25.171 bw ( KiB/s): min= 2570, max= 2752, per=4.13%, avg=2651.47, stdev=48.20, samples=19 00:45:25.171 iops : min= 642, max= 688, avg=662.84, stdev=12.10, samples=19 00:45:25.171 lat (msec) : 10=0.09%, 20=9.35%, 50=90.56% 00:45:25.171 cpu : usr=98.99%, sys=0.74%, ctx=13, majf=0, minf=9 00:45:25.171 IO depths : 1=0.3%, 2=0.7%, 4=5.4%, 8=78.0%, 16=15.6%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=89.9%, 8=7.6%, 16=2.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6672,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182197: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=662, BW=2649KiB/s (2713kB/s)(25.9MiB/10002msec) 00:45:25.171 slat (nsec): min=5803, max=62877, avg=14323.79, stdev=9544.95 00:45:25.171 clat (usec): min=10633, max=36416, avg=24032.09, stdev=1085.96 00:45:25.171 lat (usec): min=10642, max=36427, avg=24046.41, stdev=1085.83 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[19268], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.171 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.171 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.171 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[35914], 00:45:25.171 | 99.99th=[36439] 00:45:25.171 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2647.58, stdev=61.13, samples=19 00:45:25.171 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:45:25.171 lat (msec) : 20=1.03%, 50=98.97% 00:45:25.171 cpu : usr=98.45%, sys=1.00%, ctx=189, majf=0, minf=9 00:45:25.171 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182198: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=661, BW=2645KiB/s (2708kB/s)(25.8MiB/10004msec) 00:45:25.171 slat (usec): min=5, max=122, avg=26.10, stdev=16.33 00:45:25.171 clat (usec): min=5281, max=39794, avg=23938.58, stdev=1304.77 00:45:25.171 lat (usec): min=5288, max=39807, avg=23964.68, stdev=1304.75 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[22938], 5.00th=[23200], 10.00th=[23462], 20.00th=[23462], 00:45:25.171 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.171 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:45:25.171 | 99.00th=[25297], 99.50th=[25822], 99.90th=[39584], 99.95th=[39584], 00:45:25.171 | 99.99th=[39584] 00:45:25.171 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2634.11, stdev=64.93, samples=19 00:45:25.171 iops : min= 640, max= 672, avg=658.53, stdev=16.23, samples=19 00:45:25.171 lat (msec) : 10=0.11%, 20=0.48%, 50=99.41% 00:45:25.171 cpu : usr=98.97%, sys=0.77%, ctx=21, majf=0, minf=9 00:45:25.171 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182199: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=668, BW=2673KiB/s (2737kB/s)(26.1MiB/10004msec) 00:45:25.171 slat (nsec): min=5644, max=90136, avg=20494.47, stdev=12756.78 00:45:25.171 clat (usec): min=7086, max=40156, avg=23759.51, stdev=2077.87 00:45:25.171 lat (usec): min=7092, max=40174, avg=23780.01, stdev=2079.38 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[15401], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725], 
00:45:25.171 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.171 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.171 | 99.00th=[25822], 99.50th=[33817], 99.90th=[40109], 99.95th=[40109], 00:45:25.171 | 99.99th=[40109] 00:45:25.171 bw ( KiB/s): min= 2560, max= 2944, per=4.15%, avg=2665.53, stdev=105.23, samples=19 00:45:25.171 iops : min= 640, max= 736, avg=666.37, stdev=26.32, samples=19 00:45:25.171 lat (msec) : 10=0.03%, 20=4.25%, 50=95.72% 00:45:25.171 cpu : usr=98.71%, sys=0.94%, ctx=73, majf=0, minf=9 00:45:25.171 IO depths : 1=4.1%, 2=10.1%, 4=24.2%, 8=53.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182200: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=710, BW=2840KiB/s (2908kB/s)(27.8MiB/10025msec) 00:45:25.171 slat (usec): min=5, max=114, avg=15.53, stdev=13.59 00:45:25.171 clat (usec): min=5388, max=42317, avg=22421.53, stdev=5462.65 00:45:25.171 lat (usec): min=5401, max=42340, avg=22437.06, stdev=5465.20 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[11076], 5.00th=[14615], 10.00th=[15795], 20.00th=[17695], 00:45:25.171 | 30.00th=[19268], 40.00th=[21365], 50.00th=[23725], 60.00th=[23987], 00:45:25.171 | 70.00th=[24249], 80.00th=[24511], 90.00th=[28705], 95.00th=[33424], 00:45:25.171 | 99.00th=[39584], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:45:25.171 | 99.99th=[42206] 00:45:25.171 bw ( KiB/s): min= 2592, max= 3224, per=4.43%, avg=2842.85, stdev=176.32, samples=20 00:45:25.171 iops : min= 648, max= 806, avg=710.70, stdev=44.06, samples=20 00:45:25.171 lat (msec) : 10=0.69%, 20=32.26%, 50=67.06% 00:45:25.171 cpu : usr=98.79%, sys=0.95%, ctx=7, majf=0, minf=9 00:45:25.171 IO depths : 1=1.1%, 2=2.4%, 4=9.4%, 8=74.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=7118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182201: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=695, BW=2784KiB/s (2850kB/s)(27.2MiB/10013msec) 00:45:25.171 slat (nsec): min=5666, max=95932, avg=12423.58, stdev=10643.20 00:45:25.171 clat (usec): min=6862, max=40152, avg=22905.96, stdev=4234.30 00:45:25.171 lat (usec): min=6896, max=40171, avg=22918.39, stdev=4235.69 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[11863], 5.00th=[15270], 10.00th=[16909], 20.00th=[19530], 00:45:25.171 | 30.00th=[22152], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.171 | 70.00th=[24249], 80.00th=[24511], 90.00th=[26346], 95.00th=[30016], 00:45:25.171 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109], 00:45:25.171 | 99.99th=[40109] 00:45:25.171 bw ( KiB/s): min= 2560, max= 3088, per=4.34%, avg=2784.50, stdev=142.80, samples=20 00:45:25.171 iops : min= 640, max= 772, avg=696.10, stdev=35.72, samples=20 00:45:25.171 lat (msec) : 10=0.63%, 20=20.98%, 50=78.39% 00:45:25.171 cpu : usr=98.46%, sys=1.26%, ctx=20, majf=0, minf=9 00:45:25.171 IO depths 
: 1=0.6%, 2=1.7%, 4=7.8%, 8=76.1%, 16=13.8%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=90.0%, 8=6.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182202: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=661, BW=2647KiB/s (2711kB/s)(25.9MiB/10005msec) 00:45:25.171 slat (nsec): min=5529, max=78646, avg=20463.85, stdev=12559.15 00:45:25.171 clat (usec): min=5462, max=40688, avg=24006.78, stdev=1479.62 00:45:25.171 lat (usec): min=5468, max=40703, avg=24027.24, stdev=1479.25 00:45:25.171 clat percentiles (usec): 00:45:25.171 | 1.00th=[22938], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:45:25.171 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:45:25.171 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:45:25.171 | 99.00th=[25297], 99.50th=[25822], 99.90th=[40633], 99.95th=[40633], 00:45:25.171 | 99.99th=[40633] 00:45:25.171 bw ( KiB/s): min= 2560, max= 2688, per=4.11%, avg=2634.37, stdev=64.62, samples=19 00:45:25.171 iops : min= 640, max= 672, avg=658.58, stdev=16.17, samples=19 00:45:25.171 lat (msec) : 10=0.21%, 20=0.48%, 50=99.31% 00:45:25.171 cpu : usr=98.75%, sys=0.99%, ctx=13, majf=0, minf=9 00:45:25.171 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.171 issued rwts: total=6622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.171 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.171 filename1: (groupid=0, jobs=1): err= 0: pid=3182203: Mon Dec 9 10:02:58 2024 00:45:25.171 read: IOPS=660, BW=2643KiB/s (2707kB/s)(25.8MiB/10008msec) 00:45:25.171 slat (nsec): min=5424, max=88018, avg=21927.12, stdev=12915.19 00:45:25.171 clat (usec): min=14676, max=37519, avg=24004.76, stdev=1557.87 00:45:25.171 lat (usec): min=14694, max=37526, avg=24026.69, stdev=1558.05 00:45:25.171 clat percentiles (usec): 00:45:25.172 | 1.00th=[17433], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.172 | 99.00th=[30802], 99.50th=[31851], 99.90th=[36963], 99.95th=[36963], 00:45:25.172 | 99.99th=[37487] 00:45:25.172 bw ( KiB/s): min= 2560, max= 2688, per=4.12%, avg=2644.50, stdev=59.53, samples=20 00:45:25.172 iops : min= 640, max= 672, avg=661.10, stdev=14.87, samples=20 00:45:25.172 lat (msec) : 20=2.42%, 50=97.58% 00:45:25.172 cpu : usr=98.77%, sys=0.94%, ctx=29, majf=0, minf=9 00:45:25.172 IO depths : 1=5.3%, 2=11.3%, 4=24.4%, 8=51.8%, 16=7.2%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182204: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=661, BW=2646KiB/s (2709kB/s)(25.9MiB/10014msec) 00:45:25.172 slat (nsec): min=5671, max=49423, 
avg=10527.75, stdev=6015.50 00:45:25.172 clat (usec): min=10623, max=30153, avg=24082.77, stdev=942.68 00:45:25.172 lat (usec): min=10632, max=30160, avg=24093.30, stdev=942.33 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[22676], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23987], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.172 | 99.00th=[25297], 99.50th=[25560], 99.90th=[26084], 99.95th=[30016], 00:45:25.172 | 99.99th=[30278] 00:45:25.172 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2647.58, stdev=61.13, samples=19 00:45:25.172 iops : min= 640, max= 672, avg=661.89, stdev=15.28, samples=19 00:45:25.172 lat (msec) : 20=0.63%, 50=99.37% 00:45:25.172 cpu : usr=98.82%, sys=0.92%, ctx=15, majf=0, minf=9 00:45:25.172 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182205: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=674, BW=2700KiB/s (2764kB/s)(26.4MiB/10013msec) 00:45:25.172 slat (nsec): min=5666, max=77385, avg=13211.70, stdev=8227.57 00:45:25.172 clat (usec): min=6650, max=36942, avg=23591.19, stdev=2560.49 00:45:25.172 lat (usec): min=6663, max=36956, avg=23604.40, stdev=2560.70 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[11731], 5.00th=[17433], 10.00th=[23200], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.172 | 99.00th=[30540], 99.50th=[32113], 99.90th=[36963], 99.95th=[36963], 00:45:25.172 | 99.99th=[36963] 00:45:25.172 bw ( KiB/s): min= 2554, max= 3200, per=4.20%, avg=2696.50, stdev=163.89, samples=20 00:45:25.172 iops : min= 638, max= 800, avg=674.10, stdev=41.00, samples=20 00:45:25.172 lat (msec) : 10=0.83%, 20=5.89%, 50=93.28% 00:45:25.172 cpu : usr=98.20%, sys=1.35%, ctx=88, majf=0, minf=9 00:45:25.172 IO depths : 1=5.1%, 2=10.9%, 4=23.7%, 8=52.9%, 16=7.4%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182206: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=663, BW=2654KiB/s (2717kB/s)(25.9MiB/10003msec) 00:45:25.172 slat (nsec): min=5503, max=67576, avg=16202.55, stdev=10363.93 00:45:25.172 clat (usec): min=10278, max=40793, avg=23987.17, stdev=2227.50 00:45:25.172 lat (usec): min=10284, max=40808, avg=24003.37, stdev=2228.55 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[15401], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25297], 00:45:25.172 | 99.00th=[33162], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:45:25.172 | 99.99th=[40633] 00:45:25.172 bw ( 
KiB/s): min= 2560, max= 2688, per=4.11%, avg=2634.95, stdev=59.16, samples=19 00:45:25.172 iops : min= 640, max= 672, avg=658.74, stdev=14.79, samples=19 00:45:25.172 lat (msec) : 20=3.71%, 50=96.29% 00:45:25.172 cpu : usr=98.81%, sys=0.90%, ctx=72, majf=0, minf=9 00:45:25.172 IO depths : 1=3.7%, 2=9.7%, 4=24.1%, 8=53.7%, 16=8.8%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182207: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=666, BW=2665KiB/s (2729kB/s)(26.1MiB/10013msec) 00:45:25.172 slat (usec): min=5, max=112, avg=14.31, stdev=11.33 00:45:25.172 clat (usec): min=6819, max=29644, avg=23894.82, stdev=1739.45 00:45:25.172 lat (usec): min=6854, max=29653, avg=23909.13, stdev=1738.52 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[11731], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.172 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:45:25.172 | 99.99th=[29754] 00:45:25.172 bw ( KiB/s): min= 2554, max= 3072, per=4.15%, avg=2662.10, stdev=114.78, samples=20 00:45:25.172 iops : min= 638, max= 768, avg=665.50, stdev=28.72, samples=20 00:45:25.172 lat (msec) : 10=0.91%, 20=0.79%, 50=98.29% 00:45:25.172 cpu : usr=98.80%, sys=0.92%, ctx=29, majf=0, minf=9 00:45:25.172 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182208: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=662, BW=2649KiB/s (2713kB/s)(25.9MiB/10004msec) 00:45:25.172 slat (nsec): min=5560, max=93454, avg=16354.35, stdev=14633.90 00:45:25.172 clat (usec): min=6086, max=47007, avg=24086.21, stdev=2833.91 00:45:25.172 lat (usec): min=6091, max=47025, avg=24102.57, stdev=2834.21 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[16319], 5.00th=[19268], 10.00th=[21365], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.172 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25560], 95.00th=[28967], 00:45:25.172 | 99.00th=[32900], 99.50th=[34341], 99.90th=[46924], 99.95th=[46924], 00:45:25.172 | 99.99th=[46924] 00:45:25.172 bw ( KiB/s): min= 2400, max= 2752, per=4.11%, avg=2640.00, stdev=72.88, samples=19 00:45:25.172 iops : min= 600, max= 688, avg=660.00, stdev=18.22, samples=19 00:45:25.172 lat (msec) : 10=0.24%, 20=6.96%, 50=92.80% 00:45:25.172 cpu : usr=98.90%, sys=0.84%, ctx=11, majf=0, minf=11 00:45:25.172 IO depths : 1=0.1%, 2=0.2%, 4=2.4%, 8=80.5%, 16=16.8%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=89.4%, 8=9.1%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6626,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
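
The per-file banner at the top of this run ("rw=randread, bs=(R) 4096B-4096B, ..., iodepth=16") is fio echoing back the job file it was handed on /dev/fd/61. A sketch of what such a job file looks like, with values inferred from the banner and the ~10s run times rather than copied from gen_fio_conf: 24 reported threads over three files implies numjobs=8 per section, thread=1 is required by the spdk_bdev engine, and bdev names follow the NvmeX controller names from the JSON config.

cat > jobfile.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4096
iodepth=16
runtime=10
time_based=1
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF
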
00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182209: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=665, BW=2660KiB/s (2724kB/s)(26.0MiB/10003msec) 00:45:25.172 slat (nsec): min=5695, max=64831, avg=17226.96, stdev=11227.89 00:45:25.172 clat (usec): min=4949, max=66976, avg=23918.08, stdev=2672.88 00:45:25.172 lat (usec): min=4955, max=66994, avg=23935.30, stdev=2673.67 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[14746], 5.00th=[22414], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[25035], 00:45:25.172 | 99.00th=[30278], 99.50th=[33424], 99.90th=[57410], 99.95th=[57410], 00:45:25.172 | 99.99th=[66847] 00:45:25.172 bw ( KiB/s): min= 2436, max= 2832, per=4.12%, avg=2646.11, stdev=90.81, samples=19 00:45:25.172 iops : min= 609, max= 708, avg=661.53, stdev=22.70, samples=19 00:45:25.172 lat (msec) : 10=0.33%, 20=3.91%, 50=95.52%, 100=0.24% 00:45:25.172 cpu : usr=99.03%, sys=0.71%, ctx=28, majf=0, minf=9 00:45:25.172 IO depths : 1=2.9%, 2=8.5%, 4=23.5%, 8=55.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 issued rwts: total=6652,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.172 filename2: (groupid=0, jobs=1): err= 0: pid=3182210: Mon Dec 9 10:02:58 2024 00:45:25.172 read: IOPS=660, BW=2643KiB/s (2706kB/s)(25.8MiB/10001msec) 00:45:25.172 slat (nsec): min=5680, max=88656, avg=22225.72, stdev=14641.18 00:45:25.172 clat (usec): min=15005, max=39120, avg=23993.39, stdev=1277.93 00:45:25.172 lat (usec): min=15011, max=39133, avg=24015.62, stdev=1278.27 00:45:25.172 clat percentiles (usec): 00:45:25.172 | 1.00th=[19530], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:45:25.172 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:45:25.172 | 70.00th=[24249], 80.00th=[24511], 90.00th=[24773], 95.00th=[24773], 00:45:25.172 | 99.00th=[28181], 99.50th=[30802], 99.90th=[39060], 99.95th=[39060], 00:45:25.172 | 99.99th=[39060] 00:45:25.172 bw ( KiB/s): min= 2544, max= 2704, per=4.11%, avg=2640.84, stdev=65.42, samples=19 00:45:25.172 iops : min= 636, max= 676, avg=660.21, stdev=16.36, samples=19 00:45:25.172 lat (msec) : 20=1.24%, 50=98.76% 00:45:25.172 cpu : usr=98.21%, sys=1.09%, ctx=171, majf=0, minf=9 00:45:25.172 IO depths : 1=5.7%, 2=11.9%, 4=24.8%, 8=50.8%, 16=6.8%, 32=0.0%, >=64=0.0% 00:45:25.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.172 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.173 issued rwts: total=6608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.173 filename2: (groupid=0, jobs=1): err= 0: pid=3182211: Mon Dec 9 10:02:58 2024 00:45:25.173 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10005msec) 00:45:25.173 slat (usec): min=5, max=109, avg=17.76, stdev=15.23 00:45:25.173 clat (usec): min=8710, max=40532, avg=23855.57, stdev=2973.87 00:45:25.173 lat (usec): min=8716, max=40549, avg=23873.32, stdev=2974.76 00:45:25.173 clat percentiles (usec): 00:45:25.173 | 1.00th=[14484], 5.00th=[17433], 10.00th=[21627], 20.00th=[23725], 00:45:25.173 | 
30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[24249], 00:45:25.173 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[26346], 00:45:25.173 | 99.00th=[33817], 99.50th=[38011], 99.90th=[40633], 99.95th=[40633], 00:45:25.173 | 99.99th=[40633] 00:45:25.173 bw ( KiB/s): min= 2484, max= 2864, per=4.16%, avg=2668.00, stdev=75.60, samples=19 00:45:25.173 iops : min= 621, max= 716, avg=667.00, stdev=18.90, samples=19 00:45:25.173 lat (msec) : 10=0.07%, 20=7.95%, 50=91.97% 00:45:25.173 cpu : usr=97.82%, sys=1.40%, ctx=265, majf=0, minf=9 00:45:25.173 IO depths : 1=0.4%, 2=1.1%, 4=5.4%, 8=77.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:45:25.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.173 complete : 0=0.0%, 4=90.2%, 8=7.8%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:25.173 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:25.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:45:25.173 00:45:25.173 Run status group 0 (all jobs): 00:45:25.173 READ: bw=62.7MiB/s (65.7MB/s), 2638KiB/s-2841KiB/s (2701kB/s-2909kB/s), io=628MiB (659MB), run=10001-10025msec 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 bdev_null0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 [2024-12-09 10:02:59.208939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 bdev_null1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:25.173 { 00:45:25.173 "params": { 00:45:25.173 "name": "Nvme$subsystem", 00:45:25.173 "trtype": "$TEST_TRANSPORT", 00:45:25.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:25.173 "adrfam": "ipv4", 00:45:25.173 "trsvcid": "$NVMF_PORT", 00:45:25.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:25.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:25.173 "hdgst": ${hdgst:-false}, 00:45:25.173 "ddgst": ${ddgst:-false} 00:45:25.173 }, 00:45:25.173 "method": "bdev_nvme_attach_controller" 00:45:25.173 } 00:45:25.173 EOF 00:45:25.173 )") 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:25.173 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:25.174 { 00:45:25.174 "params": { 00:45:25.174 "name": "Nvme$subsystem", 00:45:25.174 "trtype": "$TEST_TRANSPORT", 00:45:25.174 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:25.174 "adrfam": "ipv4", 00:45:25.174 "trsvcid": "$NVMF_PORT", 00:45:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:45:25.174 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:25.174 "hdgst": ${hdgst:-false}, 00:45:25.174 "ddgst": ${ddgst:-false} 00:45:25.174 }, 00:45:25.174 "method": "bdev_nvme_attach_controller" 00:45:25.174 } 00:45:25.174 EOF 00:45:25.174 )") 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:45:25.174 10:02:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:25.174 "params": { 00:45:25.174 "name": "Nvme0", 00:45:25.174 "trtype": "tcp", 00:45:25.174 "traddr": "10.0.0.2", 00:45:25.174 "adrfam": "ipv4", 00:45:25.174 "trsvcid": "4420", 00:45:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:25.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:25.174 "hdgst": false, 00:45:25.174 "ddgst": false 00:45:25.174 }, 00:45:25.174 "method": "bdev_nvme_attach_controller" 00:45:25.174 },{ 00:45:25.174 "params": { 00:45:25.174 "name": "Nvme1", 00:45:25.174 "trtype": "tcp", 00:45:25.174 "traddr": "10.0.0.2", 00:45:25.174 "adrfam": "ipv4", 00:45:25.174 "trsvcid": "4420", 00:45:25.174 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:45:25.174 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:45:25.174 "hdgst": false, 00:45:25.174 "ddgst": false 00:45:25.174 }, 00:45:25.174 "method": "bdev_nvme_attach_controller" 00:45:25.174 }' 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:25.174 10:02:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:25.174 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:25.174 ... 00:45:25.174 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:45:25.174 ... 
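
This second pass recreates the namespaces with --dif-type 1 and switches to a mixed-block-size workload: bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1, giving two files times two jobs ("Starting 4 threads" below). fio's comma-separated bs triple assigns separate read, write, and trim block sizes, which is exactly how the banner renders it: bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB. The equivalent command-line form, as a sketch reusing the bdev.json placeholder from earlier:

LD_PRELOAD=./build/fio/spdk_bdev fio --name=filename0 --ioengine=spdk_bdev \
    --spdk_json_conf=bdev.json --filename=Nvme0n1 --thread \
    --rw=randread --bs=8k,16k,128k --numjobs=2 --iodepth=8 \
    --runtime=5 --time_based
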
00:45:25.174 fio-3.35 00:45:25.174 Starting 4 threads 00:45:30.454 00:45:30.454 filename0: (groupid=0, jobs=1): err= 0: pid=3184414: Mon Dec 9 10:03:05 2024 00:45:30.454 read: IOPS=2973, BW=23.2MiB/s (24.4MB/s)(116MiB/5002msec) 00:45:30.454 slat (nsec): min=5500, max=54428, avg=8691.10, stdev=4114.94 00:45:30.454 clat (usec): min=1199, max=44394, avg=2666.06, stdev=1042.46 00:45:30.454 lat (usec): min=1207, max=44431, avg=2674.75, stdev=1042.66 00:45:30.454 clat percentiles (usec): 00:45:30.454 | 1.00th=[ 1713], 5.00th=[ 2057], 10.00th=[ 2245], 20.00th=[ 2376], 00:45:30.454 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2704], 00:45:30.454 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 3032], 95.00th=[ 3392], 00:45:30.454 | 99.00th=[ 4047], 99.50th=[ 4178], 99.90th=[ 4621], 99.95th=[44303], 00:45:30.454 | 99.99th=[44303] 00:45:30.454 bw ( KiB/s): min=21712, max=24352, per=25.56%, avg=23770.67, stdev=802.60, samples=9 00:45:30.454 iops : min= 2714, max= 3044, avg=2971.33, stdev=100.32, samples=9 00:45:30.454 lat (msec) : 2=3.73%, 4=95.23%, 10=0.99%, 50=0.05% 00:45:30.454 cpu : usr=96.32%, sys=3.38%, ctx=11, majf=0, minf=68 00:45:30.454 IO depths : 1=0.2%, 2=0.6%, 4=72.2%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 issued rwts: total=14874,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:30.454 filename0: (groupid=0, jobs=1): err= 0: pid=3184415: Mon Dec 9 10:03:05 2024 00:45:30.454 read: IOPS=2892, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:45:30.454 slat (nsec): min=5502, max=52634, avg=6927.10, stdev=3486.66 00:45:30.454 clat (usec): min=1101, max=5126, avg=2746.44, stdev=454.45 00:45:30.454 lat (usec): min=1106, max=5132, avg=2753.36, stdev=454.35 00:45:30.454 clat percentiles (usec): 00:45:30.454 | 1.00th=[ 1876], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2442], 00:45:30.454 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2704], 00:45:30.454 | 70.00th=[ 2802], 80.00th=[ 2966], 90.00th=[ 3359], 95.00th=[ 3785], 00:45:30.454 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 4948], 00:45:30.454 | 99.99th=[ 5145] 00:45:30.454 bw ( KiB/s): min=22752, max=23424, per=24.93%, avg=23180.44, stdev=198.03, samples=9 00:45:30.454 iops : min= 2844, max= 2928, avg=2897.56, stdev=24.75, samples=9 00:45:30.454 lat (msec) : 2=1.97%, 4=95.82%, 10=2.21% 00:45:30.454 cpu : usr=96.44%, sys=3.14%, ctx=153, majf=0, minf=69 00:45:30.454 IO depths : 1=0.2%, 2=0.9%, 4=71.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 issued rwts: total=14467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:30.454 filename1: (groupid=0, jobs=1): err= 0: pid=3184416: Mon Dec 9 10:03:05 2024 00:45:30.454 read: IOPS=2868, BW=22.4MiB/s (23.5MB/s)(112MiB/5001msec) 00:45:30.454 slat (nsec): min=5484, max=62676, avg=6924.66, stdev=3458.58 00:45:30.454 clat (usec): min=1017, max=5288, avg=2770.08, stdev=482.95 00:45:30.454 lat (usec): min=1028, max=5294, avg=2777.00, stdev=482.85 00:45:30.454 clat percentiles (usec): 00:45:30.454 | 1.00th=[ 1860], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2474], 00:45:30.454 | 
30.00th=[ 2573], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2704], 00:45:30.454 | 70.00th=[ 2769], 80.00th=[ 2933], 90.00th=[ 3523], 95.00th=[ 3884], 00:45:30.454 | 99.00th=[ 4359], 99.50th=[ 4555], 99.90th=[ 4817], 99.95th=[ 4948], 00:45:30.454 | 99.99th=[ 5276] 00:45:30.454 bw ( KiB/s): min=22544, max=23278, per=24.68%, avg=22949.11, stdev=280.05, samples=9 00:45:30.454 iops : min= 2818, max= 2909, avg=2868.56, stdev=34.90, samples=9 00:45:30.454 lat (msec) : 2=1.70%, 4=95.00%, 10=3.30% 00:45:30.454 cpu : usr=97.02%, sys=2.72%, ctx=5, majf=0, minf=39 00:45:30.454 IO depths : 1=0.2%, 2=0.6%, 4=70.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 issued rwts: total=14347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:30.454 filename1: (groupid=0, jobs=1): err= 0: pid=3184417: Mon Dec 9 10:03:05 2024 00:45:30.454 read: IOPS=2889, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:45:30.454 slat (nsec): min=5487, max=64455, avg=7007.62, stdev=3729.81 00:45:30.454 clat (usec): min=942, max=4988, avg=2749.41, stdev=425.78 00:45:30.454 lat (usec): min=948, max=4994, avg=2756.42, stdev=425.78 00:45:30.454 clat percentiles (usec): 00:45:30.454 | 1.00th=[ 1876], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2507], 00:45:30.454 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2704], 60.00th=[ 2737], 00:45:30.454 | 70.00th=[ 2802], 80.00th=[ 2966], 90.00th=[ 3261], 95.00th=[ 3720], 00:45:30.454 | 99.00th=[ 4228], 99.50th=[ 4359], 99.90th=[ 4752], 99.95th=[ 4752], 00:45:30.454 | 99.99th=[ 5014] 00:45:30.454 bw ( KiB/s): min=22672, max=23456, per=24.82%, avg=23082.44, stdev=272.57, samples=9 00:45:30.454 iops : min= 2834, max= 2932, avg=2885.22, stdev=33.98, samples=9 00:45:30.454 lat (usec) : 1000=0.03% 00:45:30.454 lat (msec) : 2=1.97%, 4=95.99%, 10=2.01% 00:45:30.454 cpu : usr=96.86%, sys=2.86%, ctx=8, majf=0, minf=39 00:45:30.454 IO depths : 1=0.2%, 2=1.1%, 4=71.3%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:30.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:30.454 issued rwts: total=14449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:30.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:45:30.454 00:45:30.454 Run status group 0 (all jobs): 00:45:30.454 READ: bw=90.8MiB/s (95.2MB/s), 22.4MiB/s-23.2MiB/s (23.5MB/s-24.4MB/s), io=454MiB (476MB), run=5001-5002msec 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
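
The rpc_cmd calls traced here and just below walk target/dif.sh's destroy path; condensed, the helper pair amounts to a short loop (rpc_cmd being the harness wrapper around scripts/rpc.py):

destroy_subsystem() {
    local sub_id=$1
    # order as traced: delete the NVMe-oF subsystem first, then its bdev
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
    rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}
destroy_subsystems() {
    local sub
    for sub in "$@"; do destroy_subsystem "$sub"; done
}
destroy_subsystems 0 1
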
00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.454 00:45:30.454 real 0m24.572s 00:45:30.454 user 5m16.462s 00:45:30.454 sys 0m5.139s 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 ************************************ 00:45:30.454 END TEST fio_dif_rand_params 00:45:30.454 ************************************ 00:45:30.454 10:03:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:45:30.454 10:03:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:30.454 10:03:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:30.454 10:03:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:30.454 ************************************ 00:45:30.454 START TEST fio_dif_digest 00:45:30.454 ************************************ 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:45:30.454 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:45:30.455 10:03:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:30.455 bdev_null0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:30.455 [2024-12-09 10:03:05.842741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:45:30.455 { 00:45:30.455 "params": { 00:45:30.455 "name": "Nvme$subsystem", 00:45:30.455 "trtype": "$TEST_TRANSPORT", 00:45:30.455 "traddr": "$NVMF_FIRST_TARGET_IP", 00:45:30.455 "adrfam": "ipv4", 00:45:30.455 "trsvcid": "$NVMF_PORT", 00:45:30.455 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:45:30.455 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:45:30.455 "hdgst": ${hdgst:-false}, 00:45:30.455 "ddgst": ${ddgst:-false} 00:45:30.455 }, 00:45:30.455 "method": "bdev_nvme_attach_controller" 00:45:30.455 } 00:45:30.455 EOF 00:45:30.455 )") 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
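
Interleaved with the JSON assembly, the trace probes the fio plugin for sanitizer linkage. The probe reduces to the sketch below; the plugin path comes from the trace, while the job/json filenames stand in for the /dev/fd process substitutions the harness actually passes.

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
ld_preload=""
for sanitizer in libasan libclang_rt.asan; do
    # ldd prints "lib => /path/to/lib (addr)"; field 3 is the path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [ -n "$asan_lib" ] && ld_preload="$asan_lib"
done
# a found sanitizer runtime must be preloaded ahead of the plugin itself
LD_PRELOAD="$ld_preload $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
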
00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:45:30.455 "params": { 00:45:30.455 "name": "Nvme0", 00:45:30.455 "trtype": "tcp", 00:45:30.455 "traddr": "10.0.0.2", 00:45:30.455 "adrfam": "ipv4", 00:45:30.455 "trsvcid": "4420", 00:45:30.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:45:30.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:45:30.455 "hdgst": true, 00:45:30.455 "ddgst": true 00:45:30.455 }, 00:45:30.455 "method": "bdev_nvme_attach_controller" 00:45:30.455 }' 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:45:30.455 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:45:30.747 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:45:30.747 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:45:30.747 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:45:30.747 10:03:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:45:31.010 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:45:31.010 ... 
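
The DIF-protected target this digest job runs against was created further above; expressed as direct rpc.py calls (equivalent to the traced rpc_cmd wrapper, default RPC socket assumed), the setup is:

rpc=scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
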
00:45:31.010 fio-3.35 00:45:31.010 Starting 3 threads 00:45:43.235 00:45:43.235 filename0: (groupid=0, jobs=1): err= 0: pid=3185933: Mon Dec 9 10:03:16 2024 00:45:43.235 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(240MiB/10015msec) 00:45:43.235 slat (nsec): min=5900, max=54566, avg=7088.00, stdev=1836.63 00:45:43.235 clat (msec): min=6, max=132, avg=15.63, stdev=14.43 00:45:43.235 lat (msec): min=6, max=132, avg=15.64, stdev=14.43 00:45:43.235 clat percentiles (msec): 00:45:43.235 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:45:43.235 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:45:43.235 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 51], 95.00th=[ 52], 00:45:43.235 | 99.00th=[ 53], 99.50th=[ 54], 99.90th=[ 94], 99.95th=[ 133], 00:45:43.235 | 99.99th=[ 133] 00:45:43.235 bw ( KiB/s): min=17152, max=34048, per=22.67%, avg=24550.40, stdev=4909.46, samples=20 00:45:43.235 iops : min= 134, max= 266, avg=191.80, stdev=38.36, samples=20 00:45:43.235 lat (msec) : 10=33.68%, 20=53.67%, 50=1.87%, 100=10.72%, 250=0.05% 00:45:43.235 cpu : usr=95.88%, sys=3.91%, ctx=23, majf=0, minf=142 00:45:43.235 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:43.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 issued rwts: total=1921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:43.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:43.235 filename0: (groupid=0, jobs=1): err= 0: pid=3185934: Mon Dec 9 10:03:16 2024 00:45:43.235 read: IOPS=340, BW=42.5MiB/s (44.6MB/s)(427MiB/10045msec) 00:45:43.235 slat (nsec): min=8359, max=49570, avg=9481.95, stdev=1644.92 00:45:43.235 clat (usec): min=5413, max=45251, avg=8780.51, stdev=1573.66 00:45:43.235 lat (usec): min=5422, max=45260, avg=8789.99, stdev=1573.73 00:45:43.235 clat percentiles (usec): 00:45:43.235 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6849], 20.00th=[ 7242], 00:45:43.235 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9372], 00:45:43.235 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10552], 95.00th=[11076], 00:45:43.235 | 99.00th=[11731], 99.50th=[12256], 99.90th=[12780], 99.95th=[12911], 00:45:43.235 | 99.99th=[45351] 00:45:43.235 bw ( KiB/s): min=38400, max=47616, per=40.38%, avg=43741.95, stdev=2730.02, samples=20 00:45:43.235 iops : min= 300, max= 372, avg=341.70, stdev=21.33, samples=20 00:45:43.235 lat (msec) : 10=77.88%, 20=22.09%, 50=0.03% 00:45:43.235 cpu : usr=94.04%, sys=5.68%, ctx=68, majf=0, minf=185 00:45:43.235 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:43.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 issued rwts: total=3418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:43.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:43.235 filename0: (groupid=0, jobs=1): err= 0: pid=3185935: Mon Dec 9 10:03:16 2024 00:45:43.235 read: IOPS=314, BW=39.4MiB/s (41.3MB/s)(395MiB/10047msec) 00:45:43.235 slat (nsec): min=5894, max=46892, avg=7008.32, stdev=1251.42 00:45:43.235 clat (usec): min=5900, max=52864, avg=9506.32, stdev=2191.59 00:45:43.235 lat (usec): min=5908, max=52900, avg=9513.33, stdev=2191.79 00:45:43.235 clat percentiles (usec): 00:45:43.235 | 1.00th=[ 6587], 5.00th=[ 7242], 10.00th=[ 7504], 20.00th=[ 7963], 00:45:43.235 | 30.00th=[ 8455], 40.00th=[ 8979], 
50.00th=[ 9634], 60.00th=[10028], 00:45:43.235 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11338], 95.00th=[11731], 00:45:43.235 | 99.00th=[12518], 99.50th=[12780], 99.90th=[50594], 99.95th=[51643], 00:45:43.235 | 99.99th=[52691] 00:45:43.235 bw ( KiB/s): min=37120, max=42752, per=37.35%, avg=40460.80, stdev=1768.70, samples=20 00:45:43.235 iops : min= 290, max= 334, avg=316.10, stdev=13.82, samples=20 00:45:43.235 lat (msec) : 10=59.60%, 20=40.25%, 50=0.03%, 100=0.13% 00:45:43.235 cpu : usr=94.28%, sys=5.49%, ctx=23, majf=0, minf=155 00:45:43.235 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:45:43.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:45:43.235 issued rwts: total=3163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:45:43.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:45:43.235 00:45:43.235 Run status group 0 (all jobs): 00:45:43.235 READ: bw=106MiB/s (111MB/s), 24.0MiB/s-42.5MiB/s (25.1MB/s-44.6MB/s), io=1063MiB (1114MB), run=10015-10047msec 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:45:43.235 00:45:43.235 real 0m11.126s 00:45:43.235 user 0m41.518s 00:45:43.235 sys 0m1.882s 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:43.235 10:03:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:45:43.235 ************************************ 00:45:43.235 END TEST fio_dif_digest 00:45:43.235 ************************************ 00:45:43.235 10:03:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:45:43.235 10:03:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:45:43.235 10:03:16 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:45:43.235 rmmod nvme_tcp 00:45:43.235 rmmod nvme_fabrics 00:45:43.235 rmmod nvme_keyring 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3175455 ']' 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3175455 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3175455 ']' 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3175455 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175455 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175455' 00:45:43.235 killing process with pid 3175455 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3175455 00:45:43.235 10:03:17 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3175455 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:45:43.235 10:03:17 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:45:45.149 Waiting for block devices as requested 00:45:45.149 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:45.409 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:45.410 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:45.410 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:45.670 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:45.670 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:45.670 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:45.670 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:45.930 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:45:45.930 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:45:46.190 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:45:46.190 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:45:46.190 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:45:46.451 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:45:46.451 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:45:46.451 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:45:46.710 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:45:46.969 10:03:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:46.969 10:03:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:46.969 10:03:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:48.876 10:03:24 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:45:49.136 00:45:49.136 real 1m17.735s 00:45:49.136 
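
The killprocess helper traced just above guards its kill with liveness and identity checks before waiting for the target to exit; condensed:

killprocess() {
    local pid=$1
    kill -0 "$pid" 2> /dev/null || return 0             # already gone
    # refuse to kill a sudo wrapper; only the reactor process itself
    [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}
killprocess 3175455   # the nvmf target pid from this run
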
user 7m55.105s 00:45:49.136 sys 0m22.137s 00:45:49.136 10:03:24 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:45:49.136 10:03:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:45:49.136 ************************************ 00:45:49.136 END TEST nvmf_dif 00:45:49.136 ************************************ 00:45:49.136 10:03:24 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:49.136 10:03:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:45:49.136 10:03:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:45:49.136 10:03:24 -- common/autotest_common.sh@10 -- # set +x 00:45:49.136 ************************************ 00:45:49.136 START TEST nvmf_abort_qd_sizes 00:45:49.136 ************************************ 00:45:49.136 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:45:49.136 * Looking for test storage... 00:45:49.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:45:49.136 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:45:49.136 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:45:49.136 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:45:49.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:49.397 --rc genhtml_branch_coverage=1 00:45:49.397 --rc genhtml_function_coverage=1 00:45:49.397 --rc genhtml_legend=1 00:45:49.397 --rc geninfo_all_blocks=1 00:45:49.397 --rc geninfo_unexecuted_blocks=1 00:45:49.397 00:45:49.397 ' 00:45:49.397 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:45:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:49.398 --rc genhtml_branch_coverage=1 00:45:49.398 --rc genhtml_function_coverage=1 00:45:49.398 --rc genhtml_legend=1 00:45:49.398 --rc geninfo_all_blocks=1 00:45:49.398 --rc geninfo_unexecuted_blocks=1 00:45:49.398 00:45:49.398 ' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:45:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:49.398 --rc genhtml_branch_coverage=1 00:45:49.398 --rc genhtml_function_coverage=1 00:45:49.398 --rc genhtml_legend=1 00:45:49.398 --rc geninfo_all_blocks=1 00:45:49.398 --rc geninfo_unexecuted_blocks=1 00:45:49.398 00:45:49.398 ' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:45:49.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:45:49.398 --rc genhtml_branch_coverage=1 00:45:49.398 --rc genhtml_function_coverage=1 00:45:49.398 --rc genhtml_legend=1 00:45:49.398 --rc geninfo_all_blocks=1 00:45:49.398 --rc geninfo_unexecuted_blocks=1 00:45:49.398 00:45:49.398 ' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
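
The lt/cmp_versions pair traced here compares dotted versions field by field; condensed to its core (a sketch omitting the per-field decimal validation the real helper performs):

lt() {
    local IFS=.- i v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    # walk the longer of the two version arrays, defaulting missing fields to 0
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1
}
lt 1.15 2 && echo "lcov 1.15 predates the 2.x output format"
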
NVMF_IP_PREFIX=192.168.100 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:45:49.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:45:49.398 10:03:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- 
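
One genuine script bug surfaces in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and bash rejects the empty string as a non-integer ("integer expression expected") because an unset variable reaches a numeric test undefaulted. The usual fix is to default the variable before any numeric test; a sketch, with the flag and argument names hypothetical:

if [ "${SPDK_TEST_SOME_FEATURE:-0}" -eq 1 ]; then
    NVMF_APP+=(--hypothetical-flag)
fi
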
nvmf/common.sh@320 -- # local -ga e810 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:45:57.526 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:45:57.527 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:45:57.527 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:45:57.527 Found net devices under 0000:4b:00.0: cvl_0_0 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:45:57.527 Found net devices under 0000:4b:00.1: cvl_0_1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:45:57.527 10:03:31 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:45:57.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:45:57.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:45:57.527 00:45:57.527 --- 10.0.0.2 ping statistics --- 00:45:57.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:57.527 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:45:57.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:45:57.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:45:57.527 00:45:57.527 --- 10.0.0.1 ping statistics --- 00:45:57.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:45:57.527 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:45:57.527 10:03:31 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:00.072 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:00.072 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3195694 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3195694 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3195694 ']' 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
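
Stripped of trace prefixes, the nvmf_tcp_init sequence above isolates the target interface in its own network namespace so the initiator (10.0.0.1) and target (10.0.0.2) exchange real TCP traffic over the link pair, with port 4420 explicitly allowed through iptables:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
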
00:46:00.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:00.332 10:03:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:00.591 [2024-12-09 10:03:35.828187] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:46:00.591 [2024-12-09 10:03:35.828244] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:46:00.591 [2024-12-09 10:03:35.924621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:46:00.591 [2024-12-09 10:03:35.944735] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:46:00.591 [2024-12-09 10:03:35.944773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:46:00.591 [2024-12-09 10:03:35.944782] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:46:00.591 [2024-12-09 10:03:35.944789] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:46:00.591 [2024-12-09 10:03:35.944794] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:46:00.591 [2024-12-09 10:03:35.946314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:00.591 [2024-12-09 10:03:35.946431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:46:00.591 [2024-12-09 10:03:35.946593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:00.591 [2024-12-09 10:03:35.946594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:46:01.524 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:01.524 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:46:01.524 10:03:36 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:46:01.524 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:46:01.525 
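
nvme_in_userspace, traced above, picks the NVMe controllers available to SPDK. The real helper consults a prebuilt pci_bus_cache map; a hedged directory-scan equivalent keeps NVMe-class functions not claimed by the kernel nvme driver:

bdfs=()
for dev in /sys/bus/pci/devices/*; do
    [[ $(< "$dev/class") == 0x010802 ]] || continue      # NVMe class code
    bdf=${dev##*/}
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && continue  # still kernel-owned
    bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"   # 0000:65:00.0 in this run (vfio-pci bound)
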
10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:01.525 10:03:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:01.525 ************************************ 00:46:01.525 START TEST spdk_target_abort 00:46:01.525 ************************************ 00:46:01.525 10:03:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:46:01.525 10:03:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:46:01.525 10:03:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:46:01.525 10:03:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.525 10:03:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:01.783 spdk_targetn1 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:01.783 [2024-12-09 10:03:37.033604] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:01.783 [2024-12-09 10:03:37.081896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:01.783 10:03:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:02.042 [2024-12-09 10:03:37.263277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:188 nsid:1 lba:32 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:02.042 [2024-12-09 10:03:37.263300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0005 p:1 m:0 dnr:0 00:46:02.042 [2024-12-09 10:03:37.335266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1208 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:02.042 [2024-12-09 10:03:37.335285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0098 p:1 m:0 dnr:0 00:46:02.042 [2024-12-09 10:03:37.359128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2136 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:46:02.042 [2024-12-09 10:03:37.359145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:46:02.042 [2024-12-09 10:03:37.384245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3256 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:02.042 [2024-12-09 10:03:37.384261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0099 p:0 m:0 dnr:0 00:46:02.042 [2024-12-09 10:03:37.398053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3728 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:46:02.042 [2024-12-09 10:03:37.398068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00d4 p:0 m:0 dnr:0 00:46:05.328 Initializing NVMe Controllers 00:46:05.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:05.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:05.328 Initialization complete. Launching workers. 
00:46:05.328 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 13276, failed: 5 00:46:05.328 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3089, failed to submit 10192 00:46:05.328 success 691, unsuccessful 2398, failed 0 00:46:05.328 10:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:05.328 10:03:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:05.328 [2024-12-09 10:03:40.481902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:1008 len:8 PRP1 0x200004e56000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.481948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:46:05.328 [2024-12-09 10:03:40.497775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1448 len:8 PRP1 0x200004e40000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.497798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:46:05.328 [2024-12-09 10:03:40.521799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:2024 len:8 PRP1 0x200004e44000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.521820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:46:05.328 [2024-12-09 10:03:40.548618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:185 nsid:1 lba:2552 len:8 PRP1 0x200004e56000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.548651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:185 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:46:05.328 [2024-12-09 10:03:40.563763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:2904 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.563785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:46:05.328 [2024-12-09 10:03:40.610602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:3984 len:8 PRP1 0x200004e48000 PRP2 0x0 00:46:05.328 [2024-12-09 10:03:40.610625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:00fa p:0 m:0 dnr:0 00:46:06.711 [2024-12-09 10:03:42.104790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:37496 len:8 PRP1 0x200004e3a000 PRP2 0x0 00:46:06.711 [2024-12-09 10:03:42.104822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:005a p:1 m:0 dnr:0 00:46:08.619 Initializing NVMe Controllers 00:46:08.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:08.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:08.619 Initialization complete. Launching workers. 
00:46:08.619 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8479, failed: 7 00:46:08.619 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1207, failed to submit 7279 00:46:08.619 success 372, unsuccessful 835, failed 0 00:46:08.619 10:03:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:08.619 10:03:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:11.913 Initializing NVMe Controllers 00:46:11.913 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:46:11.913 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:11.913 Initialization complete. Launching workers. 00:46:11.913 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43852, failed: 0 00:46:11.913 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2710, failed to submit 41142 00:46:11.913 success 586, unsuccessful 2124, failed 0 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:11.913 10:03:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3195694 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3195694 ']' 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3195694 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:13.294 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195694 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195694' 00:46:13.553 killing process with pid 3195694 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3195694 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@978 -- # wait 3195694 00:46:13.553 00:46:13.553 real 0m12.153s 00:46:13.553 user 0m49.717s 00:46:13.553 sys 0m1.900s 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:13.553 10:03:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:13.553 ************************************ 00:46:13.553 END TEST spdk_target_abort 00:46:13.553 ************************************ 00:46:13.554 10:03:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:46:13.554 10:03:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:13.554 10:03:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:13.554 10:03:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:13.554 ************************************ 00:46:13.554 START TEST kernel_target_abort 00:46:13.554 ************************************ 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:46:13.554 10:03:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:16.844 Waiting for block devices as requested 00:46:16.844 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:16.844 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:17.105 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:17.105 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:17.105 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:17.364 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:17.364 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:17.364 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:17.623 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:17.623 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:17.882 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:17.882 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:17.882 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:18.142 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:18.142 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:18.142 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:18.402 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:18.667 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:46:18.668 No valid GPT data, bailing 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:18.668 10:03:53 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:18.668 10:03:53 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:46:18.668
00:46:18.668 Discovery Log Number of Records 2, Generation counter 2
00:46:18.668 =====Discovery Log Entry 0======
00:46:18.668 trtype: tcp
00:46:18.668 adrfam: ipv4
00:46:18.668 subtype: current discovery subsystem
00:46:18.668 treq: not specified, sq flow control disable supported
00:46:18.668 portid: 1
00:46:18.668 trsvcid: 4420
00:46:18.668 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:46:18.668 traddr: 10.0.0.1
00:46:18.668 eflags: none
00:46:18.668 sectype: none
00:46:18.668 =====Discovery Log Entry 1======
00:46:18.668 trtype: tcp
00:46:18.668 adrfam: ipv4
00:46:18.668 subtype: nvme subsystem
00:46:18.668 treq: not specified, sq flow control disable supported
00:46:18.668 portid: 1
00:46:18.668 trsvcid: 4420
00:46:18.668 subnqn: nqn.2016-06.io.spdk:testnqn
00:46:18.668 traddr: 10.0.0.1
00:46:18.668 eflags: none
00:46:18.668 sectype: none
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:46:18.668 10:03:54
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:18.668 10:03:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:22.028 Initializing NVMe Controllers 00:46:22.028 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:22.028 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:22.028 Initialization complete. Launching workers. 00:46:22.028 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67532, failed: 0 00:46:22.028 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67532, failed to submit 0 00:46:22.028 success 0, unsuccessful 67532, failed 0 00:46:22.028 10:03:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:22.028 10:03:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:25.325 Initializing NVMe Controllers 00:46:25.325 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:25.325 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:25.325 Initialization complete. Launching workers. 
00:46:25.325 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 114706, failed: 0 00:46:25.325 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28882, failed to submit 85824 00:46:25.325 success 0, unsuccessful 28882, failed 0 00:46:25.325 10:04:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:46:25.325 10:04:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:46:28.619 Initializing NVMe Controllers 00:46:28.619 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:46:28.619 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:46:28.619 Initialization complete. Launching workers. 00:46:28.619 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145992, failed: 0 00:46:28.619 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36538, failed to submit 109454 00:46:28.619 success 0, unsuccessful 36538, failed 0 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:46:28.619 10:04:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:46:31.910 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:46:31.910 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:46:31.910 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:46:33.820 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:46:33.820 00:46:33.820 real 0m20.165s 00:46:33.820 user 0m9.810s 00:46:33.820 sys 0m5.965s 00:46:33.820 10:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:33.820 10:04:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:46:33.820 ************************************ 00:46:33.820 END TEST kernel_target_abort 00:46:33.820 ************************************ 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:46:33.820 rmmod nvme_tcp 00:46:33.820 rmmod nvme_fabrics 00:46:33.820 rmmod nvme_keyring 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3195694 ']' 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3195694 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3195694 ']' 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3195694 00:46:33.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3195694) - No such process 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3195694 is not found' 00:46:33.820 Process with pid 3195694 is not found 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:46:33.820 10:04:09 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:46:37.129 Waiting for block devices as requested 00:46:37.129 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:37.388 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:37.388 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:37.388 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:37.647 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:37.647 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:37.647 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:37.906 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:37.906 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:46:38.166 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:46:38.166 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:46:38.166 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:46:38.425 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:46:38.425 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:46:38.425 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:46:38.686 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:46:38.686 
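For reference, the kernel_target_abort test that just wrapped up drives the in-kernel nvmet target entirely through configfs; the mkdir/echo/ln -s sequence traced at nvmf/common.sh@686-705 corresponds roughly to the sketch below. The values are the ones from this run; the attr_*/addr_*/device_path file names are the standard kernel nvmet configfs attributes, inferred here since the xtrace shows only the echo side of each redirection:

  modprobe nvmet        # nvmet_tcp comes in with the TCP port; the teardown above removed both via modprobe -r
  sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  port=/sys/kernel/config/nvmet/ports/1
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
  echo 1 > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"   # publishing the subsystem on the port starts the listener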
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:46:38.946 10:04:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:46:41.486 10:04:16 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:46:41.486 00:46:41.486 real 0m51.965s 00:46:41.486 user 1m5.079s 00:46:41.486 sys 0m18.557s 00:46:41.486 10:04:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:41.486 10:04:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:46:41.486 ************************************ 00:46:41.486 END TEST nvmf_abort_qd_sizes 00:46:41.486 ************************************ 00:46:41.486 10:04:16 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:41.486 10:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:46:41.486 10:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:41.486 10:04:16 -- common/autotest_common.sh@10 -- # set +x 00:46:41.486 ************************************ 00:46:41.486 START TEST keyring_file 00:46:41.486 ************************************ 00:46:41.486 10:04:16 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:46:41.486 * Looking for test storage... 
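Before the log moves on to keyring_file: both abort tests above (SPDK target on 10.0.0.2, kernel target on 10.0.0.1) ran the same rabort sweep. Stripped of the shell bookkeeping, each pass is one invocation of the bundled abort example per queue depth, e.g. for the SPDK-target case:

  for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done

The paired 'NS: ... I/O completed' / 'CTRLR: ... abort submitted' lines in the summaries above are this tool's per-run totals.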
00:46:41.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:41.486 10:04:16 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:41.486 10:04:16 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:46:41.486 10:04:16 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:41.486 10:04:16 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:46:41.486 10:04:16 keyring_file -- scripts/common.sh@345 -- # : 1 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@353 -- # local d=1 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@355 -- # echo 1 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@353 -- # local d=2 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@355 -- # echo 2 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@368 -- # return 0 00:46:41.487 10:04:16 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:41.487 10:04:16 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.487 --rc genhtml_branch_coverage=1 00:46:41.487 --rc genhtml_function_coverage=1 00:46:41.487 --rc genhtml_legend=1 00:46:41.487 --rc geninfo_all_blocks=1 00:46:41.487 --rc geninfo_unexecuted_blocks=1 00:46:41.487 00:46:41.487 ' 00:46:41.487 10:04:16 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.487 --rc genhtml_branch_coverage=1 00:46:41.487 --rc genhtml_function_coverage=1 00:46:41.487 --rc genhtml_legend=1 00:46:41.487 --rc geninfo_all_blocks=1 
00:46:41.487 --rc geninfo_unexecuted_blocks=1 00:46:41.487 00:46:41.487 ' 00:46:41.487 10:04:16 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.487 --rc genhtml_branch_coverage=1 00:46:41.487 --rc genhtml_function_coverage=1 00:46:41.487 --rc genhtml_legend=1 00:46:41.487 --rc geninfo_all_blocks=1 00:46:41.487 --rc geninfo_unexecuted_blocks=1 00:46:41.487 00:46:41.487 ' 00:46:41.487 10:04:16 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:41.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:41.487 --rc genhtml_branch_coverage=1 00:46:41.487 --rc genhtml_function_coverage=1 00:46:41.487 --rc genhtml_legend=1 00:46:41.487 --rc geninfo_all_blocks=1 00:46:41.487 --rc geninfo_unexecuted_blocks=1 00:46:41.487 00:46:41.487 ' 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:41.487 10:04:16 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:41.487 10:04:16 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:41.487 10:04:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:41.487 10:04:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:41.487 10:04:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:46:41.487 10:04:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@51 -- # : 0 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:41.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hK82t5SYj7 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:41.487 10:04:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hK82t5SYj7 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hK82t5SYj7 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.hK82t5SYj7 00:46:41.487 10:04:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9XaLC6ElsP 00:46:41.487 10:04:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:41.488 10:04:16 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:41.488 10:04:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9XaLC6ElsP 00:46:41.488 10:04:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9XaLC6ElsP 00:46:41.488 10:04:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.9XaLC6ElsP 00:46:41.488 10:04:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=3206086 00:46:41.488 10:04:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3206086 00:46:41.488 10:04:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3206086 ']' 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:41.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:41.488 10:04:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:41.488 [2024-12-09 10:04:16.872238] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:46:41.488 [2024-12-09 10:04:16.872317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206086 ] 00:46:41.749 [2024-12-09 10:04:16.966728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:41.749 [2024-12-09 10:04:16.994907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:42.318 10:04:17 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:42.319 10:04:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:42.319 [2024-12-09 10:04:17.686171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:42.319 null0 00:46:42.319 [2024-12-09 10:04:17.718209] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:42.319 [2024-12-09 10:04:17.718452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:42.319 10:04:17 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:42.319 [2024-12-09 10:04:17.750277] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:46:42.319 request: 00:46:42.319 { 00:46:42.319 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:46:42.319 "secure_channel": false, 00:46:42.319 "listen_address": { 00:46:42.319 "trtype": "tcp", 00:46:42.319 "traddr": "127.0.0.1", 00:46:42.319 "trsvcid": "4420" 00:46:42.319 }, 00:46:42.319 "method": "nvmf_subsystem_add_listener", 00:46:42.319 "req_id": 1 00:46:42.319 } 00:46:42.319 Got JSON-RPC error response 00:46:42.319 response: 00:46:42.319 { 00:46:42.319 
"code": -32602, 00:46:42.319 "message": "Invalid parameters" 00:46:42.319 } 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:42.319 10:04:17 keyring_file -- keyring/file.sh@47 -- # bperfpid=3206148 00:46:42.319 10:04:17 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3206148 /var/tmp/bperf.sock 00:46:42.319 10:04:17 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3206148 ']' 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:42.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:42.319 10:04:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:42.580 [2024-12-09 10:04:17.808108] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:46:42.581 [2024-12-09 10:04:17.808159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3206148 ] 00:46:42.581 [2024-12-09 10:04:17.897588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:42.581 [2024-12-09 10:04:17.916099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:43.152 10:04:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:43.152 10:04:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:43.152 10:04:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:43.152 10:04:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:43.412 10:04:18 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9XaLC6ElsP 00:46:43.412 10:04:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9XaLC6ElsP 00:46:43.673 10:04:18 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:46:43.673 10:04:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:46:43.673 10:04:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.673 10:04:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.673 10:04:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:46:43.673 10:04:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.hK82t5SYj7 == \/\t\m\p\/\t\m\p\.\h\K\8\2\t\5\S\Y\j\7 ]] 00:46:43.673 10:04:19 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:46:43.673 10:04:19 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:46:43.673 10:04:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.673 10:04:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:43.673 10:04:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:43.933 10:04:19 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.9XaLC6ElsP == \/\t\m\p\/\t\m\p\.\9\X\a\L\C\6\E\l\s\P ]] 00:46:43.933 10:04:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:46:43.933 10:04:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:43.933 10:04:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:43.933 10:04:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:43.933 10:04:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:43.933 10:04:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.193 10:04:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:46:44.193 10:04:19 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:46:44.193 10:04:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:44.193 10:04:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:44.193 10:04:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:44.193 10:04:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:44.193 10:04:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.454 10:04:19 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:46:44.454 10:04:19 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.454 10:04:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:44.454 [2024-12-09 10:04:19.821903] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:44.454 nvme0n1 00:46:44.714 10:04:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:46:44.714 10:04:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:44.714 10:04:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:44.714 10:04:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:44.715 10:04:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.715 10:04:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:44.715 10:04:20 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:46:44.715 10:04:20 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:46:44.715 10:04:20 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:46:44.715 10:04:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:44.715 10:04:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:44.715 10:04:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:44.715 10:04:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:44.976 10:04:20 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:46:44.976 10:04:20 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:44.976 Running I/O for 1 seconds... 00:46:46.359 16186.00 IOPS, 63.23 MiB/s 00:46:46.359 Latency(us) 00:46:46.359 [2024-12-09T09:04:21.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:46.359 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:46:46.359 nvme0n1 : 1.01 16233.78 63.41 0.00 0.00 7866.95 3085.65 13926.40 00:46:46.359 [2024-12-09T09:04:21.812Z] =================================================================================================================== 00:46:46.359 [2024-12-09T09:04:21.812Z] Total : 16233.78 63.41 0.00 0.00 7866.95 3085.65 13926.40 00:46:46.359 { 00:46:46.359 "results": [ 00:46:46.359 { 00:46:46.359 "job": "nvme0n1", 00:46:46.359 "core_mask": "0x2", 00:46:46.359 "workload": "randrw", 00:46:46.359 "percentage": 50, 00:46:46.359 "status": "finished", 00:46:46.359 "queue_depth": 128, 00:46:46.359 "io_size": 4096, 00:46:46.359 "runtime": 1.005065, 00:46:46.359 "iops": 16233.775924940179, 00:46:46.359 "mibps": 63.41318720679757, 00:46:46.359 "io_failed": 0, 00:46:46.359 "io_timeout": 0, 00:46:46.359 "avg_latency_us": 7866.949245730163, 00:46:46.359 "min_latency_us": 3085.653333333333, 00:46:46.359 "max_latency_us": 13926.4 00:46:46.359 } 00:46:46.359 ], 00:46:46.359 "core_count": 1 00:46:46.359 } 00:46:46.359 10:04:21 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:46.359 10:04:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:46.359 10:04:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:46:46.359 10:04:21 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.359 10:04:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:46.621 10:04:21 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:46.621 10:04:21 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:46:46.621 10:04:21 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:46.621 10:04:21 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:46.621 10:04:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:46:46.883 [2024-12-09 10:04:22.142131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:46.883 [2024-12-09 10:04:22.142886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8a00 (107): Transport endpoint is not connected 00:46:46.883 [2024-12-09 10:04:22.143881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd8a00 (9): Bad file descriptor 00:46:46.883 [2024-12-09 10:04:22.144882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:46.883 [2024-12-09 10:04:22.144889] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:46.883 [2024-12-09 10:04:22.144895] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:46.883 [2024-12-09 10:04:22.144901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
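This attach is a deliberate negative test: the target was set up with key0, so dialing it with key1 has to fail, and the harness asserts that by wrapping the RPC in its NOT helper (the request/response dump that follows records the resulting -5 Input/output error). A minimal sketch of the inverted-exit-status pattern behind NOT (the real NOT/valid_exec_arg pair in autotest_common.sh adds argument validation and error-code bookkeeping):

NOT() {
    # Succeed only when the wrapped command fails.
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}

NOT scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1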
00:46:46.883 request: 00:46:46.883 { 00:46:46.883 "name": "nvme0", 00:46:46.883 "trtype": "tcp", 00:46:46.883 "traddr": "127.0.0.1", 00:46:46.883 "adrfam": "ipv4", 00:46:46.883 "trsvcid": "4420", 00:46:46.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:46.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:46.883 "prchk_reftag": false, 00:46:46.883 "prchk_guard": false, 00:46:46.883 "hdgst": false, 00:46:46.883 "ddgst": false, 00:46:46.883 "psk": "key1", 00:46:46.883 "allow_unrecognized_csi": false, 00:46:46.883 "method": "bdev_nvme_attach_controller", 00:46:46.883 "req_id": 1 00:46:46.883 } 00:46:46.883 Got JSON-RPC error response 00:46:46.883 response: 00:46:46.883 { 00:46:46.883 "code": -5, 00:46:46.883 "message": "Input/output error" 00:46:46.883 } 00:46:46.883 10:04:22 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:46.883 10:04:22 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:46.883 10:04:22 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:46.883 10:04:22 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:46.883 10:04:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:46:46.883 10:04:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:46.883 10:04:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:46.883 10:04:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:46.883 10:04:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:46.883 10:04:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.145 10:04:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:46:47.145 10:04:22 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.145 10:04:22 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:46:47.145 10:04:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:46:47.145 10:04:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:47.406 10:04:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:46:47.406 10:04:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:46:47.668 10:04:22 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:46:47.668 10:04:22 keyring_file -- keyring/file.sh@78 -- # jq length 00:46:47.668 10:04:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:47.668 10:04:23 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:46:47.668 10:04:23 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.hK82t5SYj7 00:46:47.668 10:04:23 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:47.668 10:04:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:47.668 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:47.929 [2024-12-09 10:04:23.221664] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.hK82t5SYj7': 0100660 00:46:47.929 [2024-12-09 10:04:23.221681] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:46:47.929 request: 00:46:47.929 { 00:46:47.929 "name": "key0", 00:46:47.929 "path": "/tmp/tmp.hK82t5SYj7", 00:46:47.929 "method": "keyring_file_add_key", 00:46:47.929 "req_id": 1 00:46:47.929 } 00:46:47.929 Got JSON-RPC error response 00:46:47.929 response: 00:46:47.929 { 00:46:47.929 "code": -1, 00:46:47.929 "message": "Operation not permitted" 00:46:47.929 } 00:46:47.929 10:04:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:47.929 10:04:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:47.929 10:04:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:47.929 10:04:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:47.929 10:04:23 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.hK82t5SYj7 00:46:47.929 10:04:23 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:47.929 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hK82t5SYj7 00:46:48.190 10:04:23 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.hK82t5SYj7 00:46:48.190 10:04:23 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:48.190 10:04:23 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:46:48.190 10:04:23 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:48.190 10:04:23 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.190 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.452 [2024-12-09 10:04:23.763038] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.hK82t5SYj7': No such file or directory 00:46:48.452 [2024-12-09 10:04:23.763053] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:46:48.452 [2024-12-09 10:04:23.763067] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:46:48.452 [2024-12-09 10:04:23.763072] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:46:48.452 [2024-12-09 10:04:23.763078] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:46:48.452 [2024-12-09 10:04:23.763082] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:46:48.452 request: 00:46:48.452 { 00:46:48.452 "name": "nvme0", 00:46:48.452 "trtype": "tcp", 00:46:48.452 "traddr": "127.0.0.1", 00:46:48.452 "adrfam": "ipv4", 00:46:48.452 "trsvcid": "4420", 00:46:48.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:48.452 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:48.452 "prchk_reftag": false, 00:46:48.452 "prchk_guard": false, 00:46:48.452 "hdgst": false, 00:46:48.452 "ddgst": false, 00:46:48.452 "psk": "key0", 00:46:48.452 "allow_unrecognized_csi": false, 00:46:48.452 "method": "bdev_nvme_attach_controller", 00:46:48.452 "req_id": 1 00:46:48.452 } 00:46:48.452 Got JSON-RPC error response 00:46:48.452 response: 00:46:48.452 { 00:46:48.452 "code": -19, 00:46:48.452 "message": "No such device" 00:46:48.452 } 00:46:48.452 10:04:23 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:46:48.452 10:04:23 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:48.452 10:04:23 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:48.452 10:04:23 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:48.452 10:04:23 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:46:48.452 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:48.714 10:04:23 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:46:48.714 10:04:23 keyring_file -- nvmf/common.sh@733 -- # python - 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:23 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j5FyddPMwU 00:46:48.714 10:04:24 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.714 10:04:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:48.976 nvme0n1 00:46:48.976 10:04:24 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:46:48.976 10:04:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:48.976 10:04:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:48.976 10:04:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:48.976 10:04:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:48.976 10:04:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.238 10:04:24 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:46:49.238 10:04:24 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:46:49.238 10:04:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:46:49.499 10:04:24 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:46:49.499 10:04:24 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.499 10:04:24 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:46:49.499 10:04:24 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:49.499 10:04:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:49.757 10:04:25 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:46:49.757 10:04:25 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:49.757 10:04:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:50.016 10:04:25 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:46:50.016 10:04:25 keyring_file -- keyring/file.sh@105 -- # jq length 00:46:50.016 10:04:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:50.277 10:04:25 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:46:50.277 10:04:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.j5FyddPMwU 00:46:50.277 10:04:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.j5FyddPMwU 00:46:50.277 10:04:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.9XaLC6ElsP 00:46:50.277 10:04:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.9XaLC6ElsP 00:46:50.537 10:04:25 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:50.537 10:04:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:46:50.810 nvme0n1 00:46:50.810 10:04:26 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:46:50.810 10:04:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:46:51.105 10:04:26 keyring_file -- keyring/file.sh@113 -- # config='{ 00:46:51.105 "subsystems": [ 00:46:51.105 { 00:46:51.105 "subsystem": "keyring", 00:46:51.105 "config": [ 00:46:51.105 { 00:46:51.105 "method": "keyring_file_add_key", 00:46:51.105 "params": { 00:46:51.105 "name": "key0", 00:46:51.105 "path": "/tmp/tmp.j5FyddPMwU" 00:46:51.105 } 00:46:51.105 }, 00:46:51.105 { 00:46:51.105 "method": "keyring_file_add_key", 00:46:51.105 "params": { 00:46:51.105 "name": "key1", 00:46:51.105 "path": "/tmp/tmp.9XaLC6ElsP" 00:46:51.105 } 00:46:51.105 } 00:46:51.105 ] 00:46:51.105 
}, 00:46:51.105 { 00:46:51.105 "subsystem": "iobuf", 00:46:51.105 "config": [ 00:46:51.105 { 00:46:51.106 "method": "iobuf_set_options", 00:46:51.106 "params": { 00:46:51.106 "small_pool_count": 8192, 00:46:51.106 "large_pool_count": 1024, 00:46:51.106 "small_bufsize": 8192, 00:46:51.106 "large_bufsize": 135168, 00:46:51.106 "enable_numa": false 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "sock", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "sock_set_default_impl", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "posix" 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "sock_impl_set_options", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "ssl", 00:46:51.106 "recv_buf_size": 4096, 00:46:51.106 "send_buf_size": 4096, 00:46:51.106 "enable_recv_pipe": true, 00:46:51.106 "enable_quickack": false, 00:46:51.106 "enable_placement_id": 0, 00:46:51.106 "enable_zerocopy_send_server": true, 00:46:51.106 "enable_zerocopy_send_client": false, 00:46:51.106 "zerocopy_threshold": 0, 00:46:51.106 "tls_version": 0, 00:46:51.106 "enable_ktls": false 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "sock_impl_set_options", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "posix", 00:46:51.106 "recv_buf_size": 2097152, 00:46:51.106 "send_buf_size": 2097152, 00:46:51.106 "enable_recv_pipe": true, 00:46:51.106 "enable_quickack": false, 00:46:51.106 "enable_placement_id": 0, 00:46:51.106 "enable_zerocopy_send_server": true, 00:46:51.106 "enable_zerocopy_send_client": false, 00:46:51.106 "zerocopy_threshold": 0, 00:46:51.106 "tls_version": 0, 00:46:51.106 "enable_ktls": false 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "vmd", 00:46:51.106 "config": [] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "accel", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "accel_set_options", 00:46:51.106 "params": { 00:46:51.106 "small_cache_size": 128, 00:46:51.106 "large_cache_size": 16, 00:46:51.106 "task_count": 2048, 00:46:51.106 "sequence_count": 2048, 00:46:51.106 "buf_count": 2048 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "bdev", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "bdev_set_options", 00:46:51.106 "params": { 00:46:51.106 "bdev_io_pool_size": 65535, 00:46:51.106 "bdev_io_cache_size": 256, 00:46:51.106 "bdev_auto_examine": true, 00:46:51.106 "iobuf_small_cache_size": 128, 00:46:51.106 "iobuf_large_cache_size": 16 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_raid_set_options", 00:46:51.106 "params": { 00:46:51.106 "process_window_size_kb": 1024, 00:46:51.106 "process_max_bandwidth_mb_sec": 0 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_iscsi_set_options", 00:46:51.106 "params": { 00:46:51.106 "timeout_sec": 30 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_nvme_set_options", 00:46:51.106 "params": { 00:46:51.106 "action_on_timeout": "none", 00:46:51.106 "timeout_us": 0, 00:46:51.106 "timeout_admin_us": 0, 00:46:51.106 "keep_alive_timeout_ms": 10000, 00:46:51.106 "arbitration_burst": 0, 00:46:51.106 "low_priority_weight": 0, 00:46:51.106 "medium_priority_weight": 0, 00:46:51.106 "high_priority_weight": 0, 00:46:51.106 "nvme_adminq_poll_period_us": 10000, 00:46:51.106 "nvme_ioq_poll_period_us": 0, 00:46:51.106 "io_queue_requests": 512, 00:46:51.106 
"delay_cmd_submit": true, 00:46:51.106 "transport_retry_count": 4, 00:46:51.106 "bdev_retry_count": 3, 00:46:51.106 "transport_ack_timeout": 0, 00:46:51.106 "ctrlr_loss_timeout_sec": 0, 00:46:51.106 "reconnect_delay_sec": 0, 00:46:51.106 "fast_io_fail_timeout_sec": 0, 00:46:51.106 "disable_auto_failback": false, 00:46:51.106 "generate_uuids": false, 00:46:51.106 "transport_tos": 0, 00:46:51.106 "nvme_error_stat": false, 00:46:51.106 "rdma_srq_size": 0, 00:46:51.106 "io_path_stat": false, 00:46:51.106 "allow_accel_sequence": false, 00:46:51.106 "rdma_max_cq_size": 0, 00:46:51.106 "rdma_cm_event_timeout_ms": 0, 00:46:51.106 "dhchap_digests": [ 00:46:51.106 "sha256", 00:46:51.106 "sha384", 00:46:51.106 "sha512" 00:46:51.106 ], 00:46:51.106 "dhchap_dhgroups": [ 00:46:51.106 "null", 00:46:51.106 "ffdhe2048", 00:46:51.106 "ffdhe3072", 00:46:51.106 "ffdhe4096", 00:46:51.106 "ffdhe6144", 00:46:51.106 "ffdhe8192" 00:46:51.106 ] 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_nvme_attach_controller", 00:46:51.106 "params": { 00:46:51.106 "name": "nvme0", 00:46:51.106 "trtype": "TCP", 00:46:51.106 "adrfam": "IPv4", 00:46:51.106 "traddr": "127.0.0.1", 00:46:51.106 "trsvcid": "4420", 00:46:51.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:51.106 "prchk_reftag": false, 00:46:51.106 "prchk_guard": false, 00:46:51.106 "ctrlr_loss_timeout_sec": 0, 00:46:51.106 "reconnect_delay_sec": 0, 00:46:51.106 "fast_io_fail_timeout_sec": 0, 00:46:51.106 "psk": "key0", 00:46:51.106 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:51.106 "hdgst": false, 00:46:51.106 "ddgst": false, 00:46:51.106 "multipath": "multipath" 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_nvme_set_hotplug", 00:46:51.106 "params": { 00:46:51.106 "period_us": 100000, 00:46:51.106 "enable": false 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_wait_for_examine" 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "nbd", 00:46:51.106 "config": [] 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }' 00:46:51.106 10:04:26 keyring_file -- keyring/file.sh@115 -- # killprocess 3206148 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3206148 ']' 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3206148 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206148 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206148' 00:46:51.106 killing process with pid 3206148 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@973 -- # kill 3206148 00:46:51.106 Received shutdown signal, test time was about 1.000000 seconds 00:46:51.106 00:46:51.106 Latency(us) 00:46:51.106 [2024-12-09T09:04:26.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:51.106 [2024-12-09T09:04:26.559Z] =================================================================================================================== 00:46:51.106 [2024-12-09T09:04:26.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:51.106 10:04:26 
keyring_file -- common/autotest_common.sh@978 -- # wait 3206148 00:46:51.106 10:04:26 keyring_file -- keyring/file.sh@118 -- # bperfpid=3207954 00:46:51.106 10:04:26 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3207954 /var/tmp/bperf.sock 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3207954 ']' 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:51.106 10:04:26 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:51.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:51.106 10:04:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:51.106 10:04:26 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:46:51.106 "subsystems": [ 00:46:51.106 { 00:46:51.106 "subsystem": "keyring", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "keyring_file_add_key", 00:46:51.106 "params": { 00:46:51.106 "name": "key0", 00:46:51.106 "path": "/tmp/tmp.j5FyddPMwU" 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "keyring_file_add_key", 00:46:51.106 "params": { 00:46:51.106 "name": "key1", 00:46:51.106 "path": "/tmp/tmp.9XaLC6ElsP" 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "iobuf", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "iobuf_set_options", 00:46:51.106 "params": { 00:46:51.106 "small_pool_count": 8192, 00:46:51.106 "large_pool_count": 1024, 00:46:51.106 "small_bufsize": 8192, 00:46:51.106 "large_bufsize": 135168, 00:46:51.106 "enable_numa": false 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "sock", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "sock_set_default_impl", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "posix" 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "sock_impl_set_options", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "ssl", 00:46:51.106 "recv_buf_size": 4096, 00:46:51.106 "send_buf_size": 4096, 00:46:51.106 "enable_recv_pipe": true, 00:46:51.106 "enable_quickack": false, 00:46:51.106 "enable_placement_id": 0, 00:46:51.106 "enable_zerocopy_send_server": true, 00:46:51.106 "enable_zerocopy_send_client": false, 00:46:51.106 "zerocopy_threshold": 0, 00:46:51.106 "tls_version": 0, 00:46:51.106 "enable_ktls": false 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "sock_impl_set_options", 00:46:51.106 "params": { 00:46:51.106 "impl_name": "posix", 00:46:51.106 "recv_buf_size": 2097152, 00:46:51.106 "send_buf_size": 2097152, 00:46:51.106 "enable_recv_pipe": true, 00:46:51.106 "enable_quickack": false, 00:46:51.106 "enable_placement_id": 0, 00:46:51.106 "enable_zerocopy_send_server": true, 00:46:51.106 "enable_zerocopy_send_client": false, 00:46:51.106 "zerocopy_threshold": 0, 00:46:51.106 "tls_version": 0, 00:46:51.106 "enable_ktls": false 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "vmd", 00:46:51.106 "config": [] 00:46:51.106 }, 
00:46:51.106 { 00:46:51.106 "subsystem": "accel", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "accel_set_options", 00:46:51.106 "params": { 00:46:51.106 "small_cache_size": 128, 00:46:51.106 "large_cache_size": 16, 00:46:51.106 "task_count": 2048, 00:46:51.106 "sequence_count": 2048, 00:46:51.106 "buf_count": 2048 00:46:51.106 } 00:46:51.106 } 00:46:51.106 ] 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "subsystem": "bdev", 00:46:51.106 "config": [ 00:46:51.106 { 00:46:51.106 "method": "bdev_set_options", 00:46:51.106 "params": { 00:46:51.106 "bdev_io_pool_size": 65535, 00:46:51.106 "bdev_io_cache_size": 256, 00:46:51.106 "bdev_auto_examine": true, 00:46:51.106 "iobuf_small_cache_size": 128, 00:46:51.106 "iobuf_large_cache_size": 16 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_raid_set_options", 00:46:51.106 "params": { 00:46:51.106 "process_window_size_kb": 1024, 00:46:51.106 "process_max_bandwidth_mb_sec": 0 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_iscsi_set_options", 00:46:51.106 "params": { 00:46:51.106 "timeout_sec": 30 00:46:51.106 } 00:46:51.106 }, 00:46:51.106 { 00:46:51.106 "method": "bdev_nvme_set_options", 00:46:51.106 "params": { 00:46:51.106 "action_on_timeout": "none", 00:46:51.106 "timeout_us": 0, 00:46:51.106 "timeout_admin_us": 0, 00:46:51.106 "keep_alive_timeout_ms": 10000, 00:46:51.106 "arbitration_burst": 0, 00:46:51.106 "low_priority_weight": 0, 00:46:51.106 "medium_priority_weight": 0, 00:46:51.106 "high_priority_weight": 0, 00:46:51.106 "nvme_adminq_poll_period_us": 10000, 00:46:51.106 "nvme_ioq_poll_period_us": 0, 00:46:51.106 "io_queue_requests": 512, 00:46:51.106 "delay_cmd_submit": true, 00:46:51.107 "transport_retry_count": 4, 00:46:51.107 "bdev_retry_count": 3, 00:46:51.107 "transport_ack_timeout": 0, 00:46:51.107 "ctrlr_loss_timeout_sec": 0, 00:46:51.107 "reconnect_delay_sec": 0, 00:46:51.107 "fast_io_fail_timeout_sec": 0, 00:46:51.107 "disable_auto_failback": false, 00:46:51.107 "generate_uuids": false, 00:46:51.107 "transport_tos": 0, 00:46:51.107 "nvme_error_stat": false, 00:46:51.107 "rdma_srq_size": 0, 00:46:51.107 "io_path_stat": false, 00:46:51.107 "allow_accel_sequence": false, 00:46:51.107 "rdma_max_cq_size": 0, 00:46:51.107 "rdma_cm_event_timeout_ms": 0, 00:46:51.107 "dhchap_digests": [ 00:46:51.107 "sha256", 00:46:51.107 "sha384", 00:46:51.107 "sha512" 00:46:51.107 ], 00:46:51.107 "dhchap_dhgroups": [ 00:46:51.107 "null", 00:46:51.107 "ffdhe2048", 00:46:51.107 "ffdhe3072", 00:46:51.107 "ffdhe4096", 00:46:51.107 "ffdhe6144", 00:46:51.107 "ffdhe8192" 00:46:51.107 ] 00:46:51.107 } 00:46:51.107 }, 00:46:51.107 { 00:46:51.107 "method": "bdev_nvme_attach_controller", 00:46:51.107 "params": { 00:46:51.107 "name": "nvme0", 00:46:51.107 "trtype": "TCP", 00:46:51.107 "adrfam": "IPv4", 00:46:51.107 "traddr": "127.0.0.1", 00:46:51.107 "trsvcid": "4420", 00:46:51.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:51.107 "prchk_reftag": false, 00:46:51.107 "prchk_guard": false, 00:46:51.107 "ctrlr_loss_timeout_sec": 0, 00:46:51.107 "reconnect_delay_sec": 0, 00:46:51.107 "fast_io_fail_timeout_sec": 0, 00:46:51.107 "psk": "key0", 00:46:51.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:51.107 "hdgst": false, 00:46:51.107 "ddgst": false, 00:46:51.107 "multipath": "multipath" 00:46:51.107 } 00:46:51.107 }, 00:46:51.107 { 00:46:51.107 "method": "bdev_nvme_set_hotplug", 00:46:51.107 "params": { 00:46:51.107 "period_us": 100000, 00:46:51.107 "enable": false 00:46:51.107 } 00:46:51.107 }, 
00:46:51.107 { 00:46:51.107 "method": "bdev_wait_for_examine" 00:46:51.107 } 00:46:51.107 ] 00:46:51.107 }, 00:46:51.107 { 00:46:51.107 "subsystem": "nbd", 00:46:51.107 "config": [] 00:46:51.107 } 00:46:51.107 ] 00:46:51.107 }' 00:46:51.107 10:04:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:51.107 [2024-12-09 10:04:26.538492] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 00:46:51.107 [2024-12-09 10:04:26.538547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207954 ] 00:46:51.366 [2024-12-09 10:04:26.622868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:51.366 [2024-12-09 10:04:26.638693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:51.366 [2024-12-09 10:04:26.776950] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:51.934 10:04:27 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:51.934 10:04:27 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:46:51.934 10:04:27 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:46:51.934 10:04:27 keyring_file -- keyring/file.sh@121 -- # jq length 00:46:51.934 10:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.193 10:04:27 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:46:52.193 10:04:27 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:46:52.193 10:04:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:46:52.193 10:04:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.193 10:04:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.193 10:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.193 10:04:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:46:52.452 10:04:27 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:46:52.452 10:04:27 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:46:52.452 10:04:27 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:46:52.452 10:04:27 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:46:52.452 10:04:27 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:46:52.452 10:04:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:46:52.711 10:04:27 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:46:52.711 10:04:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:46:52.711 10:04:27 keyring_file -- keyring/file.sh@19 
-- # rm -f /tmp/tmp.j5FyddPMwU /tmp/tmp.9XaLC6ElsP 00:46:52.711 10:04:27 keyring_file -- keyring/file.sh@20 -- # killprocess 3207954 00:46:52.711 10:04:27 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3207954 ']' 00:46:52.711 10:04:27 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3207954 00:46:52.711 10:04:27 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:52.711 10:04:27 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:52.711 10:04:27 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3207954 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3207954' 00:46:52.711 killing process with pid 3207954 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@973 -- # kill 3207954 00:46:52.711 Received shutdown signal, test time was about 1.000000 seconds 00:46:52.711 00:46:52.711 Latency(us) 00:46:52.711 [2024-12-09T09:04:28.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:52.711 [2024-12-09T09:04:28.164Z] =================================================================================================================== 00:46:52.711 [2024-12-09T09:04:28.164Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@978 -- # wait 3207954 00:46:52.711 10:04:28 keyring_file -- keyring/file.sh@21 -- # killprocess 3206086 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3206086 ']' 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3206086 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@959 -- # uname 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:52.711 10:04:28 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206086 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206086' 00:46:52.971 killing process with pid 3206086 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@973 -- # kill 3206086 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@978 -- # wait 3206086 00:46:52.971 00:46:52.971 real 0m11.947s 00:46:52.971 user 0m28.818s 00:46:52.971 sys 0m2.672s 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:52.971 10:04:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:46:52.971 ************************************ 00:46:52.971 END TEST keyring_file 00:46:52.971 ************************************ 00:46:53.231 10:04:28 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:46:53.232 10:04:28 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:53.232 10:04:28 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:46:53.232 10:04:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:53.232 10:04:28 -- 
common/autotest_common.sh@10 -- # set +x 00:46:53.232 ************************************ 00:46:53.232 START TEST keyring_linux 00:46:53.232 ************************************ 00:46:53.232 10:04:28 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:46:53.232 Joined session keyring: 188372113 00:46:53.232 * Looking for test storage... 00:46:53.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:46:53.232 10:04:28 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:46:53.232 10:04:28 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:46:53.232 10:04:28 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:46:53.232 10:04:28 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@345 -- # : 1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:53.232 10:04:28 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@368 -- # return 0 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:46:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.492 --rc genhtml_branch_coverage=1 00:46:53.492 --rc genhtml_function_coverage=1 00:46:53.492 --rc genhtml_legend=1 00:46:53.492 --rc geninfo_all_blocks=1 00:46:53.492 --rc geninfo_unexecuted_blocks=1 00:46:53.492 00:46:53.492 ' 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:46:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.492 --rc genhtml_branch_coverage=1 00:46:53.492 --rc genhtml_function_coverage=1 00:46:53.492 --rc genhtml_legend=1 00:46:53.492 --rc geninfo_all_blocks=1 00:46:53.492 --rc geninfo_unexecuted_blocks=1 00:46:53.492 00:46:53.492 ' 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:46:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.492 --rc genhtml_branch_coverage=1 00:46:53.492 --rc genhtml_function_coverage=1 00:46:53.492 --rc genhtml_legend=1 00:46:53.492 --rc geninfo_all_blocks=1 00:46:53.492 --rc geninfo_unexecuted_blocks=1 00:46:53.492 00:46:53.492 ' 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:46:53.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:53.492 --rc genhtml_branch_coverage=1 00:46:53.492 --rc genhtml_function_coverage=1 00:46:53.492 --rc genhtml_legend=1 00:46:53.492 --rc geninfo_all_blocks=1 00:46:53.492 --rc geninfo_unexecuted_blocks=1 00:46:53.492 00:46:53.492 ' 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:46:53.492 10:04:28 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:46:53.492 10:04:28 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.492 10:04:28 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.492 10:04:28 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:46:53.492 10:04:28 keyring_linux -- paths/export.sh@5 -- # export PATH 00:46:53.492 10:04:28 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:46:53.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:46:53.492 /tmp/:spdk-test:key0 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:46:53.492 
10:04:28 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:46:53.492 10:04:28 keyring_linux -- nvmf/common.sh@733 -- # python - 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:46:53.492 10:04:28 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:46:53.492 /tmp/:spdk-test:key1 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3208395 00:46:53.492 10:04:28 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3208395 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3208395 ']' 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:53.492 10:04:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:53.492 [2024-12-09 10:04:28.853207] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
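The "[: : integer expression expected" message traced above is a genuine shell error: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the flag it tests is unset, and '[' cannot compare an empty string numerically. The run continues because the test simply returns false, but a defensive default would silence it. A sketch of the usual guard (the variable name is illustrative, not the actual one from nvmf/common.sh):

# default the flag to 0 so numeric tests never see an empty string
flag="${SOME_TEST_FLAG:-0}"        # SOME_TEST_FLAG is an illustrative name
if [ "$flag" -eq 1 ]; then
    echo "feature enabled"
fi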
00:46:53.492 [2024-12-09 10:04:28.853260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208395 ] 00:46:53.492 [2024-12-09 10:04:28.934413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:53.752 [2024-12-09 10:04:28.950955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:53.752 10:04:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:53.752 10:04:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:53.752 10:04:29 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:46:53.752 10:04:29 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:46:53.752 10:04:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:53.752 [2024-12-09 10:04:29.119101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:53.752 null0 00:46:53.752 [2024-12-09 10:04:29.151159] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:46:53.753 [2024-12-09 10:04:29.151512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:46:53.753 10:04:29 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:46:53.753 489279588 00:46:53.753 10:04:29 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:46:53.753 151884877 00:46:53.753 10:04:29 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3208446 00:46:53.753 10:04:29 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3208446 /var/tmp/bperf.sock 00:46:53.753 10:04:29 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3208446 ']' 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:46:53.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:53.753 10:04:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:54.014 [2024-12-09 10:04:29.227707] Starting SPDK v25.01-pre git sha1 a2f5e1c2d / DPDK 23.11.0 initialization... 
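The string handed to keyctl add above is the interchange-format PSK that prep_key wrote to /tmp/:spdk-test:key0; the inline "python -" step that produced it is not expanded in the trace. A sketch of what that formatting plausibly does, assuming the NVMe/TCP PSK interchange layout (base64 of the key bytes followed by their CRC-32, wrapped as NVMeTLSkey-1:<hash>:<payload>:, with hash 00 meaning no PSK digest), followed by the session-keyring round trip that the serials 489279588 and 151884877 come from:

key=00112233445566778899aabbccddeeff
payload=$(python3 - "$key" <<'EOF'
import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")   # assumption: CRC-32 appended little-endian
print(base64.b64encode(k + crc).decode())
EOF
)
psk="NVMeTLSkey-1:00:${payload}:"

sn=$(keyctl add user :spdk-test:key0 "$psk" @s)  # prints the new key's serial
keyctl search @s user :spdk-test:key0            # resolves the same serial by name
keyctl print "$sn"                               # dumps the payload back
keyctl unlink "$sn" @s                           # cleanup, as the test's unlink_key does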
00:46:54.014 [2024-12-09 10:04:29.227757] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208446 ] 00:46:54.014 [2024-12-09 10:04:29.311240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:54.014 [2024-12-09 10:04:29.327518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:54.014 10:04:29 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:54.014 10:04:29 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:46:54.015 10:04:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:46:54.015 10:04:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:46:54.275 10:04:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:46:54.275 10:04:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:54.536 10:04:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:46:54.536 [2024-12-09 10:04:29.889510] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:46:54.536 nvme0n1 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:54.536 10:04:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:54.536 10:04:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:54.797 10:04:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:46:54.797 10:04:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:54.797 10:04:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:46:54.797 10:04:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:46:54.797 10:04:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:46:54.797 10:04:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:46:54.797 10:04:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@25 -- # sn=489279588 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:55.058 10:04:30 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 489279588 == \4\8\9\2\7\9\5\8\8 ]] 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 489279588 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:46:55.058 10:04:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:46:55.058 Running I/O for 1 seconds... 00:46:56.000 23796.00 IOPS, 92.95 MiB/s 00:46:56.000 Latency(us) 00:46:56.000 [2024-12-09T09:04:31.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:46:56.001 nvme0n1 : 1.01 23794.60 92.95 0.00 0.00 5362.89 3304.11 7809.71 00:46:56.001 [2024-12-09T09:04:31.454Z] =================================================================================================================== 00:46:56.001 [2024-12-09T09:04:31.454Z] Total : 23794.60 92.95 0.00 0.00 5362.89 3304.11 7809.71 00:46:56.262 { 00:46:56.262 "results": [ 00:46:56.262 { 00:46:56.262 "job": "nvme0n1", 00:46:56.262 "core_mask": "0x2", 00:46:56.262 "workload": "randread", 00:46:56.262 "status": "finished", 00:46:56.262 "queue_depth": 128, 00:46:56.262 "io_size": 4096, 00:46:56.262 "runtime": 1.005438, 00:46:56.262 "iops": 23794.604938345277, 00:46:56.262 "mibps": 92.94767554041124, 00:46:56.262 "io_failed": 0, 00:46:56.262 "io_timeout": 0, 00:46:56.262 "avg_latency_us": 5362.886545170819, 00:46:56.262 "min_latency_us": 3304.1066666666666, 00:46:56.262 "max_latency_us": 7809.706666666667 00:46:56.262 } 00:46:56.262 ], 00:46:56.262 "core_count": 1 00:46:56.262 } 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:46:56.262 10:04:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:46:56.262 10:04:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:46:56.262 10:04:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:46:56.523 10:04:31 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:46:56.523 10:04:31 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:46:56.523 10:04:31 keyring_linux -- keyring/linux.sh@23 -- # return 00:46:56.523 10:04:31 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:46:56.523 10:04:31 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.523 10:04:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:46:56.523 [2024-12-09 10:04:31.969154] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:46:56.523 [2024-12-09 10:04:31.969929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2e730 (107): Transport endpoint is not connected 00:46:56.523 [2024-12-09 10:04:31.970925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2e730 (9): Bad file descriptor 00:46:56.523 [2024-12-09 10:04:31.971926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:46:56.523 [2024-12-09 10:04:31.971933] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:46:56.523 [2024-12-09 10:04:31.971938] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:46:56.523 [2024-12-09 10:04:31.971945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
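The NOT/valid_exec_arg wrapper traced above inverts the attach attempt's exit status, so the transport errors just logged are the expected outcome: :spdk-test:key1 does not match the key the listener was set up with, and the JSON-RPC error response that follows confirms the rejection. A simplified stand-in for the NOT helper (the real one in autotest_common.sh also screens exit codes above 128, as the "(( es > 128 ))" check below shows):

# NOT: succeed only when the wrapped command fails (simplified sketch)
NOT() {
    if "$@"; then
        return 1        # command unexpectedly succeeded
    fi
    return 0            # command failed, which is what we wanted
}

NOT false && echo "expected failure observed"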
00:46:56.785 request: 00:46:56.785 { 00:46:56.785 "name": "nvme0", 00:46:56.785 "trtype": "tcp", 00:46:56.785 "traddr": "127.0.0.1", 00:46:56.785 "adrfam": "ipv4", 00:46:56.785 "trsvcid": "4420", 00:46:56.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:46:56.785 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:46:56.785 "prchk_reftag": false, 00:46:56.785 "prchk_guard": false, 00:46:56.785 "hdgst": false, 00:46:56.785 "ddgst": false, 00:46:56.785 "psk": ":spdk-test:key1", 00:46:56.785 "allow_unrecognized_csi": false, 00:46:56.785 "method": "bdev_nvme_attach_controller", 00:46:56.785 "req_id": 1 00:46:56.785 } 00:46:56.785 Got JSON-RPC error response 00:46:56.785 response: 00:46:56.785 { 00:46:56.785 "code": -5, 00:46:56.785 "message": "Input/output error" 00:46:56.785 } 00:46:56.785 10:04:31 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:46:56.785 10:04:31 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:46:56.785 10:04:31 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:46:56.785 10:04:31 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@33 -- # sn=489279588 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 489279588 00:46:56.785 1 links removed 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:46:56.785 10:04:31 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:46:56.785 10:04:32 keyring_linux -- keyring/linux.sh@33 -- # sn=151884877 00:46:56.785 10:04:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 151884877 00:46:56.785 1 links removed 00:46:56.785 10:04:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3208446 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3208446 ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3208446 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208446 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208446' 00:46:56.785 killing process with pid 3208446 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 3208446 00:46:56.785 Received shutdown signal, test time was about 1.000000 seconds 00:46:56.785 00:46:56.785 
Latency(us) 00:46:56.785 [2024-12-09T09:04:32.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:46:56.785 [2024-12-09T09:04:32.238Z] =================================================================================================================== 00:46:56.785 [2024-12-09T09:04:32.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 3208446 00:46:56.785 10:04:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3208395 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3208395 ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3208395 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208395 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208395' 00:46:56.785 killing process with pid 3208395 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@973 -- # kill 3208395 00:46:56.785 10:04:32 keyring_linux -- common/autotest_common.sh@978 -- # wait 3208395 00:46:57.046 00:46:57.046 real 0m3.934s 00:46:57.046 user 0m7.361s 00:46:57.046 sys 0m1.400s 00:46:57.046 10:04:32 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:57.046 10:04:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:46:57.046 ************************************ 00:46:57.046 END TEST keyring_linux 00:46:57.046 ************************************ 00:46:57.046 10:04:32 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:46:57.046 10:04:32 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:46:57.046 10:04:32 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:46:57.046 10:04:32 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:46:57.046 10:04:32 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:46:57.046 10:04:32 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:46:57.046 10:04:32 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:46:57.046 10:04:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:46:57.046 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:46:57.046 10:04:32 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:46:57.047 10:04:32 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:46:57.047 10:04:32 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:46:57.047 10:04:32 -- common/autotest_common.sh@10 -- # set +x 00:47:05.191 INFO: APP EXITING 
00:47:05.191 INFO: killing all VMs 00:47:05.191 INFO: killing vhost app 00:47:05.191 INFO: EXIT DONE 00:47:08.494 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:65:00.0 (144d a80a): Already using the nvme driver 00:47:08.494 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:47:08.494 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:47:12.708 Cleaning 00:47:12.708 Removing: /var/run/dpdk/spdk0/config 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:47:12.708 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:12.708 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:12.708 Removing: /var/run/dpdk/spdk1/config 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:47:12.708 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:47:12.708 Removing: /var/run/dpdk/spdk1/hugepage_info 00:47:12.708 Removing: /var/run/dpdk/spdk2/config 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:47:12.708 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:47:12.708 Removing: /var/run/dpdk/spdk2/hugepage_info 00:47:12.708 Removing: /var/run/dpdk/spdk3/config 00:47:12.708 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:47:12.708 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:47:12.708 Removing: /var/run/dpdk/spdk3/hugepage_info 00:47:12.708 Removing: /var/run/dpdk/spdk4/config 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:47:12.708 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:47:12.708 Removing: /var/run/dpdk/spdk4/hugepage_info 00:47:12.708 Removing: /dev/shm/bdev_svc_trace.1 00:47:12.708 Removing: /dev/shm/nvmf_trace.0 00:47:12.708 Removing: /dev/shm/spdk_tgt_trace.pid2536327 00:47:12.708 Removing: /var/run/dpdk/spdk0 00:47:12.708 Removing: /var/run/dpdk/spdk1 00:47:12.708 Removing: /var/run/dpdk/spdk2 00:47:12.708 Removing: /var/run/dpdk/spdk3 00:47:12.708 Removing: /var/run/dpdk/spdk4 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2534837 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2536327 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2537175 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2538215 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2538515 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2539623 00:47:12.708 Removing: /var/run/dpdk/spdk_pid2539718 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2540093 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2541231 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2541788 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2542146 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2542499 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2542908 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2543305 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2543656 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2543953 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2544214 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2545262 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2548724 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2549001 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2549366 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2549471 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2549881 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2550179 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2550557 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2550841 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2551088 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2551266 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2551530 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2551647 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2552117 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2552440 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2552847 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2557631 00:47:12.709 Removing: 
/var/run/dpdk/spdk_pid2563320 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2575413 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2576096 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2581356 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2581839 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2586901 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2593982 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2597078 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2609597 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2620918 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2623090 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2624271 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2644929 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2649691 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2748692 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2755191 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2762777 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2770408 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2770410 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2771412 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2772420 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2773425 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2774100 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2774124 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2774434 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2774612 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2774729 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2775766 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2776769 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2777773 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2778449 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2778451 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2778782 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2779896 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2781166 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2790950 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2824700 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2830112 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2832119 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2834212 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2834465 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2834482 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2834795 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2835203 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2837222 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2838403 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2839060 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2842058 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2842651 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2843357 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2848419 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2854808 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2854809 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2854810 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2859484 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2864147 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2870011 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2914440 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2919248 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2926463 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2927966 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2929890 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2931463 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2937594 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2942729 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2947761 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2956654 00:47:12.709 Removing: 
/var/run/dpdk/spdk_pid2956745 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2961781 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2961926 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2962246 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2962674 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2962888 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2964122 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2966005 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2967948 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2969940 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2971904 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2973785 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2981097 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2981913 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2983489 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2984718 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2990946 00:47:12.709 Removing: /var/run/dpdk/spdk_pid2993994 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3000650 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3006957 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3016803 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3025224 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3025282 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3048269 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3049072 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3049599 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3050258 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3051318 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3051980 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3052463 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3053033 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3058080 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3058412 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3065444 00:47:12.709 Removing: /var/run/dpdk/spdk_pid3065679 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3072014 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3077136 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3089185 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3089940 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3094945 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3095348 00:47:12.971 Removing: /var/run/dpdk/spdk_pid3100180 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3106815 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3109822 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3121691 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3132061 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3134168 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3135175 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3154910 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3159480 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3162743 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3169933 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3169938 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3175808 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3178007 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3180434 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3181717 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3184145 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3185539 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3195946 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3196608 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3197271 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3200044 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3200585 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3201254 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3206086 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3206148 00:47:12.972 Removing: 
/var/run/dpdk/spdk_pid3207954 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3208395 00:47:12.972 Removing: /var/run/dpdk/spdk_pid3208446 00:47:12.972 Clean 00:47:12.972 10:04:48 -- common/autotest_common.sh@1453 -- # return 0 00:47:12.972 10:04:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:12.972 10:04:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:12.972 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:47:13.234 10:04:48 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:47:13.234 10:04:48 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:13.234 10:04:48 -- common/autotest_common.sh@10 -- # set +x 00:47:13.234 10:04:48 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:13.234 10:04:48 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:47:13.234 10:04:48 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:47:13.234 10:04:48 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:13.234 10:04:48 -- spdk/autotest.sh@398 -- # hostname 00:47:13.234 10:04:48 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:47:13.495 geninfo: WARNING: invalid characters removed from testname! 00:47:40.189 10:05:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:42.099 10:05:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:44.010 10:05:18 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:45.395 10:05:20 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:47.329 10:05:22 -- spdk/autotest.sh@406 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:48.714 10:05:23 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:47:50.641 10:05:25 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:47:50.641 10:05:25 -- spdk/autorun.sh@1 -- $ timing_finish 00:47:50.641 10:05:25 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:47:50.641 10:05:25 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:47:50.641 10:05:25 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:47:50.641 10:05:25 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:47:50.641 + [[ -n 2434076 ]] 00:47:50.641 + sudo kill 2434076 00:47:50.651 [Pipeline] } 00:47:50.664 [Pipeline] // stage 00:47:50.668 [Pipeline] } 00:47:50.681 [Pipeline] // timeout 00:47:50.685 [Pipeline] } 00:47:50.694 [Pipeline] // catchError 00:47:50.698 [Pipeline] } 00:47:50.709 [Pipeline] // wrap 00:47:50.713 [Pipeline] } 00:47:50.722 [Pipeline] // catchError 00:47:50.729 [Pipeline] stage 00:47:50.730 [Pipeline] { (Epilogue) 00:47:50.740 [Pipeline] catchError 00:47:50.741 [Pipeline] { 00:47:50.751 [Pipeline] echo 00:47:50.753 Cleanup processes 00:47:50.758 [Pipeline] sh 00:47:51.046 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:51.046 3222234 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:51.057 [Pipeline] sh 00:47:51.339 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:47:51.339 ++ grep -v 'sudo pgrep' 00:47:51.339 ++ awk '{print $1}' 00:47:51.339 + sudo kill -9 00:47:51.339 + true 00:47:51.351 [Pipeline] sh 00:47:51.634 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:03.890 [Pipeline] sh 00:48:04.178 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:04.178 Artifacts sizes are good 00:48:04.194 [Pipeline] archiveArtifacts 00:48:04.203 Archiving artifacts 00:48:04.468 [Pipeline] sh 00:48:04.811 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:48:04.825 [Pipeline] cleanWs 00:48:04.834 [WS-CLEANUP] Deleting project workspace... 00:48:04.834 [WS-CLEANUP] Deferred wipeout is used... 00:48:04.841 [WS-CLEANUP] done 00:48:04.843 [Pipeline] } 00:48:04.858 [Pipeline] // catchError 00:48:04.867 [Pipeline] sh 00:48:05.155 + logger -p user.info -t JENKINS-CI 00:48:05.165 [Pipeline] } 00:48:05.178 [Pipeline] // stage 00:48:05.182 [Pipeline] } 00:48:05.195 [Pipeline] // node 00:48:05.200 [Pipeline] End of Pipeline 00:48:05.234 Finished: SUCCESS
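For reference, the coverage post-processing traced in the epilogue reduces to a short lcov pipeline: capture the counters gathered during the run, merge them with the baseline, then strip DPDK, system, and example sources from the merged tracefile. A condensed sketch of those steps (paths abbreviated; the --rc coverage options carried in LCOV_OPTS are elided here):

# capture counters from the run (geninfo may warn about odd test names)
lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info
# merge baseline and test captures into one tracefile
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
# drop sources that should not count toward SPDK coverage
lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
lcov -q -r cov_total.info '/usr/*' -o cov_total.info
lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info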